Jan 23 17:54:47 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 23 17:54:47 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 23 17:54:47 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 17:54:47 localhost kernel: BIOS-provided physical RAM map:
Jan 23 17:54:47 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 17:54:47 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 17:54:47 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 17:54:47 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 23 17:54:47 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 23 17:54:47 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 17:54:47 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 17:54:47 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 23 17:54:47 localhost kernel: NX (Execute Disable) protection: active
Jan 23 17:54:47 localhost kernel: APIC: Static calls initialized
Jan 23 17:54:47 localhost kernel: SMBIOS 2.8 present.
Jan 23 17:54:47 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 23 17:54:47 localhost kernel: Hypervisor detected: KVM
Jan 23 17:54:47 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 17:54:47 localhost kernel: kvm-clock: using sched offset of 4159069347 cycles
Jan 23 17:54:47 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 17:54:47 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 23 17:54:47 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 17:54:47 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 17:54:47 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 23 17:54:47 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 17:54:47 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 23 17:54:47 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 23 17:54:47 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 23 17:54:47 localhost kernel: Using GB pages for direct mapping
Jan 23 17:54:47 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 23 17:54:47 localhost kernel: ACPI: Early table checksum verification disabled
Jan 23 17:54:47 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 23 17:54:47 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 17:54:47 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 17:54:47 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 17:54:47 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 23 17:54:47 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 17:54:47 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 17:54:47 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 23 17:54:47 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 23 17:54:47 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 23 17:54:47 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 23 17:54:47 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 23 17:54:47 localhost kernel: No NUMA configuration found
Jan 23 17:54:47 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 23 17:54:47 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 23 17:54:47 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 23 17:54:47 localhost kernel: Zone ranges:
Jan 23 17:54:47 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 17:54:47 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 17:54:47 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 23 17:54:47 localhost kernel:   Device   empty
Jan 23 17:54:47 localhost kernel: Movable zone start for each node
Jan 23 17:54:47 localhost kernel: Early memory node ranges
Jan 23 17:54:47 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 17:54:47 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 23 17:54:47 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 23 17:54:47 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 23 17:54:47 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 17:54:47 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 17:54:47 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 23 17:54:47 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 17:54:47 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 17:54:47 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 17:54:47 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 17:54:47 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 17:54:47 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 17:54:47 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 17:54:47 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 17:54:47 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 17:54:47 localhost kernel: TSC deadline timer available
Jan 23 17:54:47 localhost kernel: CPU topo: Max. logical packages:   8
Jan 23 17:54:47 localhost kernel: CPU topo: Max. logical dies:       8
Jan 23 17:54:47 localhost kernel: CPU topo: Max. dies per package:   1
Jan 23 17:54:47 localhost kernel: CPU topo: Max. threads per core:   1
Jan 23 17:54:47 localhost kernel: CPU topo: Num. cores per package:     1
Jan 23 17:54:47 localhost kernel: CPU topo: Num. threads per package:   1
Jan 23 17:54:47 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 23 17:54:47 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 17:54:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 23 17:54:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 23 17:54:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 23 17:54:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 23 17:54:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 23 17:54:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 23 17:54:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 23 17:54:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 23 17:54:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 23 17:54:47 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 23 17:54:47 localhost kernel: Booting paravirtualized kernel on KVM
Jan 23 17:54:47 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 17:54:47 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 23 17:54:47 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 23 17:54:47 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 23 17:54:47 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 23 17:54:47 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 23 17:54:47 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 17:54:47 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 23 17:54:47 localhost kernel: random: crng init done
Jan 23 17:54:47 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 17:54:47 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 17:54:47 localhost kernel: Fallback order for Node 0: 0 
Jan 23 17:54:47 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 23 17:54:47 localhost kernel: Policy zone: Normal
Jan 23 17:54:47 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 17:54:47 localhost kernel: software IO TLB: area num 8.
Jan 23 17:54:47 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 23 17:54:47 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 23 17:54:47 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 23 17:54:47 localhost kernel: Dynamic Preempt: voluntary
Jan 23 17:54:47 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 17:54:47 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 23 17:54:47 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 23 17:54:47 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 23 17:54:47 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 23 17:54:47 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 23 17:54:47 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 17:54:47 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 23 17:54:47 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 17:54:47 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 17:54:47 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 17:54:47 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 23 17:54:47 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 17:54:47 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 23 17:54:47 localhost kernel: Console: colour VGA+ 80x25
Jan 23 17:54:47 localhost kernel: printk: console [ttyS0] enabled
Jan 23 17:54:47 localhost kernel: ACPI: Core revision 20230331
Jan 23 17:54:47 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 17:54:47 localhost kernel: x2apic enabled
Jan 23 17:54:47 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 17:54:47 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 23 17:54:47 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 23 17:54:47 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 17:54:47 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 17:54:47 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 17:54:47 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 17:54:47 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 17:54:47 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 17:54:47 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 23 17:54:47 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 23 17:54:47 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 17:54:47 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 17:54:47 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 17:54:47 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 17:54:47 localhost kernel: x86/bugs: return thunk changed
Jan 23 17:54:47 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 17:54:47 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 17:54:47 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 17:54:47 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 17:54:47 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 23 17:54:47 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 17:54:47 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 23 17:54:47 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 23 17:54:47 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 23 17:54:47 localhost kernel: landlock: Up and running.
Jan 23 17:54:47 localhost kernel: Yama: becoming mindful.
Jan 23 17:54:47 localhost kernel: SELinux:  Initializing.
Jan 23 17:54:47 localhost kernel: LSM support for eBPF active
Jan 23 17:54:47 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 17:54:47 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 17:54:47 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 23 17:54:47 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 17:54:47 localhost kernel: ... version:                0
Jan 23 17:54:47 localhost kernel: ... bit width:              48
Jan 23 17:54:47 localhost kernel: ... generic registers:      6
Jan 23 17:54:47 localhost kernel: ... value mask:             0000ffffffffffff
Jan 23 17:54:47 localhost kernel: ... max period:             00007fffffffffff
Jan 23 17:54:47 localhost kernel: ... fixed-purpose events:   0
Jan 23 17:54:47 localhost kernel: ... event mask:             000000000000003f
Jan 23 17:54:47 localhost kernel: signal: max sigframe size: 1776
Jan 23 17:54:47 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 23 17:54:47 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 23 17:54:47 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 23 17:54:47 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 23 17:54:47 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 23 17:54:47 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 23 17:54:47 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 23 17:54:47 localhost kernel: node 0 deferred pages initialised in 82ms
Jan 23 17:54:47 localhost kernel: Memory: 7763736K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618364K reserved, 0K cma-reserved)
Jan 23 17:54:47 localhost kernel: devtmpfs: initialized
Jan 23 17:54:47 localhost kernel: x86/mm: Memory block size: 128MB
Jan 23 17:54:47 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 17:54:47 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 23 17:54:47 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 17:54:47 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 17:54:47 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 23 17:54:47 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 17:54:47 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 17:54:47 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 23 17:54:47 localhost kernel: audit: type=2000 audit(1769190884.672:1): state=initialized audit_enabled=0 res=1
Jan 23 17:54:47 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 23 17:54:47 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 17:54:47 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 17:54:47 localhost kernel: cpuidle: using governor menu
Jan 23 17:54:47 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:54:47 localhost kernel: PCI: Using configuration type 1 for base access
Jan 23 17:54:47 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 23 17:54:47 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 17:54:47 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 17:54:47 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 17:54:47 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:54:47 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 17:54:47 localhost kernel: Demotion targets for Node 0: null
Jan 23 17:54:47 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 17:54:47 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 23 17:54:47 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 23 17:54:47 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:54:47 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 17:54:47 localhost kernel: ACPI: Interpreter enabled
Jan 23 17:54:47 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 23 17:54:47 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 17:54:47 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 17:54:47 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 17:54:47 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 23 17:54:47 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 17:54:47 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [3] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [4] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [5] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [6] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [7] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [8] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [9] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [10] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [11] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [12] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [13] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [14] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [15] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [16] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [17] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [18] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [19] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [20] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [21] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [22] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [23] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [24] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [25] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [26] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [27] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [28] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [29] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [30] registered
Jan 23 17:54:47 localhost kernel: acpiphp: Slot [31] registered
Jan 23 17:54:47 localhost kernel: PCI host bridge to bus 0000:00
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 17:54:47 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 23 17:54:47 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 23 17:54:47 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 23 17:54:47 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 23 17:54:47 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 23 17:54:47 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 23 17:54:47 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 17:54:47 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 17:54:47 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 23 17:54:47 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 23 17:54:47 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 23 17:54:47 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 23 17:54:47 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 17:54:47 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 23 17:54:47 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 23 17:54:47 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 23 17:54:47 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 17:54:47 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 23 17:54:47 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 23 17:54:47 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 17:54:47 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 23 17:54:47 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 23 17:54:47 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 17:54:47 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 17:54:47 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 17:54:47 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 17:54:47 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 23 17:54:47 localhost kernel: iommu: Default domain type: Translated
Jan 23 17:54:47 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 17:54:47 localhost kernel: SCSI subsystem initialized
Jan 23 17:54:47 localhost kernel: ACPI: bus type USB registered
Jan 23 17:54:47 localhost kernel: usbcore: registered new interface driver usbfs
Jan 23 17:54:47 localhost kernel: usbcore: registered new interface driver hub
Jan 23 17:54:47 localhost kernel: usbcore: registered new device driver usb
Jan 23 17:54:47 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 17:54:47 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 23 17:54:47 localhost kernel: PTP clock support registered
Jan 23 17:54:47 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 23 17:54:47 localhost kernel: NetLabel: Initializing
Jan 23 17:54:47 localhost kernel: NetLabel:  domain hash size = 128
Jan 23 17:54:47 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 23 17:54:47 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 23 17:54:47 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 23 17:54:47 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 17:54:47 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 17:54:47 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 23 17:54:47 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 23 17:54:47 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 23 17:54:47 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 17:54:47 localhost kernel: vgaarb: loaded
Jan 23 17:54:47 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 17:54:47 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 17:54:47 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 17:54:47 localhost kernel: pnp: PnP ACPI init
Jan 23 17:54:47 localhost kernel: pnp 00:03: [dma 2]
Jan 23 17:54:47 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 23 17:54:47 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 17:54:47 localhost kernel: NET: Registered PF_INET protocol family
Jan 23 17:54:47 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 17:54:47 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 23 17:54:47 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 17:54:47 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 17:54:47 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 23 17:54:47 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 23 17:54:47 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 23 17:54:47 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 17:54:47 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 17:54:47 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 17:54:47 localhost kernel: NET: Registered PF_XDP protocol family
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 23 17:54:47 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 23 17:54:47 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 23 17:54:47 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 23 17:54:47 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72034 usecs
Jan 23 17:54:47 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 23 17:54:47 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 17:54:47 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 23 17:54:47 localhost kernel: ACPI: bus type thunderbolt registered
Jan 23 17:54:47 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 23 17:54:47 localhost kernel: Initialise system trusted keyrings
Jan 23 17:54:47 localhost kernel: Key type blacklist registered
Jan 23 17:54:47 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 23 17:54:47 localhost kernel: zbud: loaded
Jan 23 17:54:47 localhost kernel: integrity: Platform Keyring initialized
Jan 23 17:54:47 localhost kernel: integrity: Machine keyring initialized
Jan 23 17:54:47 localhost kernel: Freeing initrd memory: 87956K
Jan 23 17:54:47 localhost kernel: NET: Registered PF_ALG protocol family
Jan 23 17:54:47 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 23 17:54:47 localhost kernel: Key type asymmetric registered
Jan 23 17:54:47 localhost kernel: Asymmetric key parser 'x509' registered
Jan 23 17:54:47 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 23 17:54:47 localhost kernel: io scheduler mq-deadline registered
Jan 23 17:54:47 localhost kernel: io scheduler kyber registered
Jan 23 17:54:47 localhost kernel: io scheduler bfq registered
Jan 23 17:54:47 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 23 17:54:47 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 23 17:54:47 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 23 17:54:47 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 23 17:54:47 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 23 17:54:47 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 23 17:54:47 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 23 17:54:47 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 17:54:47 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 17:54:47 localhost kernel: Non-volatile memory driver v1.3
Jan 23 17:54:47 localhost kernel: rdac: device handler registered
Jan 23 17:54:47 localhost kernel: hp_sw: device handler registered
Jan 23 17:54:47 localhost kernel: emc: device handler registered
Jan 23 17:54:47 localhost kernel: alua: device handler registered
Jan 23 17:54:47 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 23 17:54:47 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 23 17:54:47 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 23 17:54:47 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 23 17:54:47 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 23 17:54:47 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:54:47 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 23 17:54:47 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 23 17:54:47 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 23 17:54:47 localhost kernel: hub 1-0:1.0: USB hub found
Jan 23 17:54:47 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 23 17:54:47 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 23 17:54:47 localhost kernel: usbserial: USB Serial support registered for generic
Jan 23 17:54:47 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 17:54:47 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 17:54:47 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 17:54:47 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 17:54:47 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 17:54:47 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 23 17:54:47 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 17:54:47 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T17:54:46 UTC (1769190886)
Jan 23 17:54:47 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 23 17:54:47 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 17:54:47 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 23 17:54:47 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 23 17:54:47 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 17:54:47 localhost kernel: usbcore: registered new interface driver usbhid
Jan 23 17:54:47 localhost kernel: usbhid: USB HID core driver
Jan 23 17:54:47 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 23 17:54:47 localhost kernel: Initializing XFRM netlink socket
Jan 23 17:54:47 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 23 17:54:47 localhost kernel: Segment Routing with IPv6
Jan 23 17:54:47 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 23 17:54:47 localhost kernel: mpls_gso: MPLS GSO support
Jan 23 17:54:47 localhost kernel: IPI shorthand broadcast: enabled
Jan 23 17:54:47 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 23 17:54:47 localhost kernel: AES CTR mode by8 optimization enabled
Jan 23 17:54:47 localhost kernel: sched_clock: Marking stable (2591004157, 156625882)->(2972172723, -224542684)
Jan 23 17:54:47 localhost kernel: registered taskstats version 1
Jan 23 17:54:47 localhost kernel: Loading compiled-in X.509 certificates
Jan 23 17:54:47 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 23 17:54:47 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 23 17:54:47 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 23 17:54:47 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 23 17:54:47 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 23 17:54:47 localhost kernel: Demotion targets for Node 0: null
Jan 23 17:54:47 localhost kernel: page_owner is disabled
Jan 23 17:54:47 localhost kernel: Key type .fscrypt registered
Jan 23 17:54:47 localhost kernel: Key type fscrypt-provisioning registered
Jan 23 17:54:47 localhost kernel: Key type big_key registered
Jan 23 17:54:47 localhost kernel: Key type encrypted registered
Jan 23 17:54:47 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 17:54:47 localhost kernel: Loading compiled-in module X.509 certificates
Jan 23 17:54:47 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 23 17:54:47 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 23 17:54:47 localhost kernel: ima: No architecture policies found
Jan 23 17:54:47 localhost kernel: evm: Initialising EVM extended attributes:
Jan 23 17:54:47 localhost kernel: evm: security.selinux
Jan 23 17:54:47 localhost kernel: evm: security.SMACK64 (disabled)
Jan 23 17:54:47 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 23 17:54:47 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 23 17:54:47 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 23 17:54:47 localhost kernel: evm: security.apparmor (disabled)
Jan 23 17:54:47 localhost kernel: evm: security.ima
Jan 23 17:54:47 localhost kernel: evm: security.capability
Jan 23 17:54:47 localhost kernel: evm: HMAC attrs: 0x1
Jan 23 17:54:47 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 23 17:54:47 localhost kernel: Running certificate verification RSA selftest
Jan 23 17:54:47 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 23 17:54:47 localhost kernel: Running certificate verification ECDSA selftest
Jan 23 17:54:47 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 23 17:54:47 localhost kernel: clk: Disabling unused clocks
Jan 23 17:54:47 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 23 17:54:47 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 23 17:54:47 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 23 17:54:47 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 23 17:54:47 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 23 17:54:47 localhost kernel: Run /init as init process
Jan 23 17:54:47 localhost kernel:   with arguments:
Jan 23 17:54:47 localhost kernel:     /init
Jan 23 17:54:47 localhost kernel:   with environment:
Jan 23 17:54:47 localhost kernel:     HOME=/
Jan 23 17:54:47 localhost kernel:     TERM=linux
Jan 23 17:54:47 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 23 17:54:47 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 23 17:54:47 localhost systemd[1]: Detected virtualization kvm.
Jan 23 17:54:47 localhost systemd[1]: Detected architecture x86-64.
Jan 23 17:54:47 localhost systemd[1]: Running in initrd.
Jan 23 17:54:47 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 23 17:54:47 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 23 17:54:47 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 23 17:54:47 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 23 17:54:47 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 23 17:54:47 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 23 17:54:47 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 23 17:54:47 localhost systemd[1]: No hostname configured, using default hostname.
Jan 23 17:54:47 localhost systemd[1]: Hostname set to <localhost>.
Jan 23 17:54:47 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 23 17:54:47 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 23 17:54:47 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 23 17:54:47 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 23 17:54:47 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 23 17:54:47 localhost systemd[1]: Reached target Local File Systems.
Jan 23 17:54:47 localhost systemd[1]: Reached target Path Units.
Jan 23 17:54:47 localhost systemd[1]: Reached target Slice Units.
Jan 23 17:54:47 localhost systemd[1]: Reached target Swaps.
Jan 23 17:54:47 localhost systemd[1]: Reached target Timer Units.
Jan 23 17:54:47 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 23 17:54:47 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 23 17:54:47 localhost systemd[1]: Listening on Journal Socket.
Jan 23 17:54:47 localhost systemd[1]: Listening on udev Control Socket.
Jan 23 17:54:47 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 23 17:54:47 localhost systemd[1]: Reached target Socket Units.
Jan 23 17:54:47 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 23 17:54:47 localhost systemd[1]: Starting Journal Service...
Jan 23 17:54:47 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 23 17:54:47 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 23 17:54:47 localhost systemd[1]: Starting Create System Users...
Jan 23 17:54:47 localhost systemd[1]: Starting Setup Virtual Console...
Jan 23 17:54:47 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 23 17:54:47 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 23 17:54:47 localhost systemd[1]: Finished Create System Users.
Jan 23 17:54:47 localhost systemd-journald[306]: Journal started
Jan 23 17:54:47 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/3ba9981f5b2a471888b1acd426df24a5) is 8.0M, max 153.6M, 145.6M free.
Jan 23 17:54:47 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Jan 23 17:54:47 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Jan 23 17:54:47 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 23 17:54:47 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 23 17:54:47 localhost systemd[1]: Started Journal Service.
Jan 23 17:54:47 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 23 17:54:47 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 23 17:54:47 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 23 17:54:47 localhost systemd[1]: Finished Setup Virtual Console.
Jan 23 17:54:47 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 23 17:54:47 localhost systemd[1]: Starting dracut cmdline hook...
Jan 23 17:54:47 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Jan 23 17:54:47 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 17:54:47 localhost systemd[1]: Finished dracut cmdline hook.
Jan 23 17:54:47 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 23 17:54:47 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 17:54:47 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 23 17:54:47 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 23 17:54:47 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 23 17:54:47 localhost kernel: RPC: Registered udp transport module.
Jan 23 17:54:47 localhost kernel: RPC: Registered tcp transport module.
Jan 23 17:54:47 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 23 17:54:47 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 23 17:54:47 localhost rpc.statd[442]: Version 2.5.4 starting
Jan 23 17:54:47 localhost rpc.statd[442]: Initializing NSM state
Jan 23 17:54:47 localhost rpc.idmapd[447]: Setting log level to 0
Jan 23 17:54:47 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 23 17:54:47 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 23 17:54:47 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Jan 23 17:54:47 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 23 17:54:47 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 23 17:54:47 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 23 17:54:47 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 23 17:54:47 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 23 17:54:47 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 23 17:54:47 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 23 17:54:47 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 17:54:47 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 23 17:54:47 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 23 17:54:47 localhost systemd[1]: Reached target Network.
Jan 23 17:54:47 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 23 17:54:48 localhost systemd[1]: Starting dracut initqueue hook...
Jan 23 17:54:48 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 23 17:54:48 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 23 17:54:48 localhost kernel:  vda: vda1
Jan 23 17:54:48 localhost kernel: libata version 3.00 loaded.
Jan 23 17:54:48 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 23 17:54:48 localhost kernel: scsi host0: ata_piix
Jan 23 17:54:48 localhost kernel: scsi host1: ata_piix
Jan 23 17:54:48 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 23 17:54:48 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 23 17:54:48 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 23 17:54:48 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 23 17:54:48 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 23 17:54:48 localhost systemd[1]: Reached target Initrd Root Device.
Jan 23 17:54:48 localhost systemd[1]: Reached target System Initialization.
Jan 23 17:54:48 localhost systemd[1]: Reached target Basic System.
Jan 23 17:54:48 localhost kernel: ata1: found unknown device (class 0)
Jan 23 17:54:48 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 17:54:48 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 23 17:54:48 localhost systemd-udevd[477]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:54:48 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 23 17:54:48 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 17:54:48 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 17:54:48 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 23 17:54:48 localhost systemd[1]: Finished dracut initqueue hook.
Jan 23 17:54:48 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 23 17:54:48 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 23 17:54:48 localhost systemd[1]: Reached target Remote File Systems.
Jan 23 17:54:48 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 23 17:54:48 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 23 17:54:48 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 23 17:54:48 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Jan 23 17:54:48 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 23 17:54:48 localhost systemd[1]: Mounting /sysroot...
Jan 23 17:54:49 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 23 17:54:49 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 23 17:54:49 localhost kernel: XFS (vda1): Ending clean mount
Jan 23 17:54:49 localhost systemd[1]: Mounted /sysroot.
Jan 23 17:54:49 localhost systemd[1]: Reached target Initrd Root File System.
Jan 23 17:54:49 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 23 17:54:49 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 23 17:54:49 localhost systemd[1]: Reached target Initrd File Systems.
Jan 23 17:54:49 localhost systemd[1]: Reached target Initrd Default Target.
Jan 23 17:54:49 localhost systemd[1]: Starting dracut mount hook...
Jan 23 17:54:49 localhost systemd[1]: Finished dracut mount hook.
Jan 23 17:54:49 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 23 17:54:49 localhost rpc.idmapd[447]: exiting on signal 15
Jan 23 17:54:49 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 23 17:54:49 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 23 17:54:49 localhost systemd[1]: Stopped target Network.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Timer Units.
Jan 23 17:54:49 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 23 17:54:49 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Basic System.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Path Units.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Remote File Systems.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Slice Units.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Socket Units.
Jan 23 17:54:49 localhost systemd[1]: Stopped target System Initialization.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Local File Systems.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Swaps.
Jan 23 17:54:49 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped dracut mount hook.
Jan 23 17:54:49 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 23 17:54:49 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 23 17:54:49 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 23 17:54:49 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 23 17:54:49 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 23 17:54:49 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 23 17:54:49 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 23 17:54:49 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 23 17:54:49 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 23 17:54:49 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 23 17:54:49 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 23 17:54:49 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 23 17:54:49 localhost systemd[1]: systemd-udevd.service: Consumed 1.041s CPU time.
Jan 23 17:54:49 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Closed udev Control Socket.
Jan 23 17:54:49 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Closed udev Kernel Socket.
Jan 23 17:54:49 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 23 17:54:49 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 23 17:54:49 localhost systemd[1]: Starting Cleanup udev Database...
Jan 23 17:54:49 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 23 17:54:49 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 23 17:54:49 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Stopped Create System Users.
Jan 23 17:54:49 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 17:54:49 localhost systemd[1]: Finished Cleanup udev Database.
Jan 23 17:54:49 localhost systemd[1]: Reached target Switch Root.
Jan 23 17:54:49 localhost systemd[1]: Starting Switch Root...
Jan 23 17:54:49 localhost systemd[1]: Switching root.
Jan 23 17:54:49 localhost systemd-journald[306]: Journal stopped
Jan 23 17:54:50 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Jan 23 17:54:50 localhost kernel: audit: type=1404 audit(1769190889.473:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 23 17:54:50 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 17:54:50 localhost kernel: SELinux:  policy capability open_perms=1
Jan 23 17:54:50 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 17:54:50 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 23 17:54:50 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 17:54:50 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 17:54:50 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 17:54:50 localhost kernel: audit: type=1403 audit(1769190889.622:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 17:54:50 localhost systemd[1]: Successfully loaded SELinux policy in 154.929ms.
Jan 23 17:54:50 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.017ms.
Jan 23 17:54:50 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 23 17:54:50 localhost systemd[1]: Detected virtualization kvm.
Jan 23 17:54:50 localhost systemd[1]: Detected architecture x86-64.
Jan 23 17:54:50 localhost systemd-rc-local-generator[638]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 17:54:50 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 17:54:50 localhost systemd[1]: Stopped Switch Root.
Jan 23 17:54:50 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 17:54:50 localhost systemd[1]: Created slice Slice /system/getty.
Jan 23 17:54:50 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 23 17:54:50 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 23 17:54:50 localhost systemd[1]: Created slice User and Session Slice.
Jan 23 17:54:50 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 23 17:54:50 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 23 17:54:50 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 23 17:54:50 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 23 17:54:50 localhost systemd[1]: Stopped target Switch Root.
Jan 23 17:54:50 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 23 17:54:50 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 23 17:54:50 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 23 17:54:50 localhost systemd[1]: Reached target Path Units.
Jan 23 17:54:50 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 23 17:54:50 localhost systemd[1]: Reached target Slice Units.
Jan 23 17:54:50 localhost systemd[1]: Reached target Swaps.
Jan 23 17:54:50 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 23 17:54:50 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 23 17:54:50 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 23 17:54:50 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 23 17:54:50 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 23 17:54:50 localhost systemd[1]: Listening on udev Control Socket.
Jan 23 17:54:50 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 23 17:54:50 localhost systemd[1]: Mounting Huge Pages File System...
Jan 23 17:54:50 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 23 17:54:50 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 23 17:54:50 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 23 17:54:50 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 23 17:54:50 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 23 17:54:50 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 23 17:54:50 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 23 17:54:50 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 23 17:54:50 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 23 17:54:50 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 23 17:54:50 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 17:54:50 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 23 17:54:50 localhost systemd[1]: Stopped Journal Service.
Jan 23 17:54:50 localhost kernel: fuse: init (API version 7.37)
Jan 23 17:54:50 localhost systemd[1]: Starting Journal Service...
Jan 23 17:54:50 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 23 17:54:50 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 23 17:54:50 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 17:54:50 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 23 17:54:50 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:54:50 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 23 17:54:50 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 23 17:54:50 localhost systemd-journald[679]: Journal started
Jan 23 17:54:50 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 23 17:54:49 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 23 17:54:49 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 17:54:50 localhost systemd[1]: Started Journal Service.
Jan 23 17:54:50 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 23 17:54:50 localhost systemd[1]: Mounted Huge Pages File System.
Jan 23 17:54:50 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 23 17:54:50 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 23 17:54:50 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 23 17:54:50 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 23 17:54:50 localhost kernel: ACPI: bus type drm_connector registered
Jan 23 17:54:50 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 17:54:50 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 23 17:54:50 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:54:50 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 23 17:54:50 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:54:50 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 23 17:54:50 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 17:54:50 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 23 17:54:50 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 23 17:54:50 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 23 17:54:50 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 23 17:54:50 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 23 17:54:50 localhost systemd[1]: Mounting FUSE Control File System...
Jan 23 17:54:50 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 23 17:54:50 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 23 17:54:50 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 23 17:54:50 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:54:50 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 23 17:54:50 localhost systemd[1]: Starting Create System Users...
Jan 23 17:54:50 localhost systemd[1]: Mounted FUSE Control File System.
Jan 23 17:54:50 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 23 17:54:50 localhost systemd-journald[679]: Received client request to flush runtime journal.
Jan 23 17:54:50 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 23 17:54:50 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 23 17:54:50 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 23 17:54:50 localhost systemd[1]: Finished Create System Users.
Jan 23 17:54:50 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 23 17:54:50 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 23 17:54:50 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 23 17:54:50 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 23 17:54:50 localhost systemd[1]: Reached target Local File Systems.
Jan 23 17:54:50 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 23 17:54:50 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 23 17:54:50 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:54:50 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 23 17:54:50 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 23 17:54:50 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 23 17:54:50 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 23 17:54:50 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Jan 23 17:54:50 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 23 17:54:50 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 23 17:54:50 localhost systemd[1]: Starting Security Auditing Service...
Jan 23 17:54:50 localhost systemd[1]: Starting RPC Bind...
Jan 23 17:54:50 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 23 17:54:50 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 23 17:54:50 localhost auditd[706]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 23 17:54:50 localhost auditd[706]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 23 17:54:50 localhost systemd[1]: Started RPC Bind.
Jan 23 17:54:50 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 23 17:54:50 localhost augenrules[711]: /sbin/augenrules: No change
Jan 23 17:54:50 localhost augenrules[726]: No rules
Jan 23 17:54:50 localhost augenrules[726]: enabled 1
Jan 23 17:54:50 localhost augenrules[726]: failure 1
Jan 23 17:54:50 localhost augenrules[726]: pid 706
Jan 23 17:54:50 localhost augenrules[726]: rate_limit 0
Jan 23 17:54:50 localhost augenrules[726]: backlog_limit 8192
Jan 23 17:54:50 localhost augenrules[726]: lost 0
Jan 23 17:54:50 localhost augenrules[726]: backlog 4
Jan 23 17:54:50 localhost augenrules[726]: backlog_wait_time 60000
Jan 23 17:54:50 localhost augenrules[726]: backlog_wait_time_actual 0
Jan 23 17:54:50 localhost systemd[1]: Started Security Auditing Service.
Jan 23 17:54:50 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 23 17:54:50 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 23 17:54:51 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 23 17:54:51 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 23 17:54:51 localhost systemd[1]: Starting Update is Completed...
Jan 23 17:54:51 localhost systemd[1]: Finished Update is Completed.
Jan 23 17:54:51 localhost systemd-udevd[734]: Using default interface naming scheme 'rhel-9.0'.
Jan 23 17:54:51 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 23 17:54:51 localhost systemd[1]: Reached target System Initialization.
Jan 23 17:54:51 localhost systemd[1]: Started dnf makecache --timer.
Jan 23 17:54:51 localhost systemd[1]: Started Daily rotation of log files.
Jan 23 17:54:51 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 23 17:54:51 localhost systemd[1]: Reached target Timer Units.
Jan 23 17:54:51 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 23 17:54:51 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 23 17:54:51 localhost systemd[1]: Reached target Socket Units.
Jan 23 17:54:51 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 23 17:54:51 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 17:54:51 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 23 17:54:51 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 23 17:54:51 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 17:54:51 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 23 17:54:51 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 23 17:54:51 localhost systemd[1]: Reached target Basic System.
Jan 23 17:54:51 localhost dbus-broker-lau[756]: Ready
Jan 23 17:54:51 localhost systemd-udevd[743]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:54:51 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 23 17:54:51 localhost systemd[1]: Starting NTP client/server...
Jan 23 17:54:51 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 23 17:54:51 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 23 17:54:51 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 23 17:54:51 localhost systemd[1]: Started irqbalance daemon.
Jan 23 17:54:51 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 23 17:54:51 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 17:54:51 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 17:54:51 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 17:54:51 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 23 17:54:51 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 23 17:54:51 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 23 17:54:51 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 23 17:54:51 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 23 17:54:51 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 17:54:51 localhost chronyd[794]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 23 17:54:51 localhost systemd[1]: Starting User Login Management...
Jan 23 17:54:51 localhost chronyd[794]: Loaded 0 symmetric keys
Jan 23 17:54:51 localhost systemd[1]: Started NTP client/server.
Jan 23 17:54:51 localhost chronyd[794]: Using right/UTC timezone to obtain leap second data
Jan 23 17:54:51 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 23 17:54:51 localhost chronyd[794]: Loaded seccomp filter (level 2)
Jan 23 17:54:51 localhost systemd-logind[792]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 17:54:51 localhost systemd-logind[792]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 23 17:54:51 localhost systemd-logind[792]: New seat seat0.
Jan 23 17:54:51 localhost systemd[1]: Started User Login Management.
Jan 23 17:54:51 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 23 17:54:51 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 23 17:54:51 localhost kernel: kvm_amd: TSC scaling supported
Jan 23 17:54:51 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 23 17:54:51 localhost kernel: kvm_amd: Nested Paging enabled
Jan 23 17:54:51 localhost kernel: kvm_amd: LBR virtualization supported
Jan 23 17:54:51 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 23 17:54:51 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 23 17:54:51 localhost kernel: Console: switching to colour dummy device 80x25
Jan 23 17:54:51 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 23 17:54:51 localhost kernel: [drm] features: -context_init
Jan 23 17:54:51 localhost kernel: [drm] number of scanouts: 1
Jan 23 17:54:51 localhost kernel: [drm] number of cap sets: 0
Jan 23 17:54:51 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 23 17:54:51 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 23 17:54:51 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 23 17:54:51 localhost iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Jan 23 17:54:51 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 23 17:54:51 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 23 17:54:51 localhost cloud-init[843]: Cloud-init v. 24.4-8.el9 running 'init-local' at Fri, 23 Jan 2026 17:54:51 +0000. Up 7.98 seconds.
Jan 23 17:54:52 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 23 17:54:52 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 23 17:54:52 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpqw_3cug1.mount: Deactivated successfully.
Jan 23 17:54:52 localhost systemd[1]: Starting Hostname Service...
Jan 23 17:54:52 localhost systemd[1]: Started Hostname Service.
Jan 23 17:54:52 np0005593912.novalocal systemd-hostnamed[857]: Hostname set to <np0005593912.novalocal> (static)
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Reached target Preparation for Network.
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Starting Network Manager...
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5494] NetworkManager (version 1.54.3-2.el9) is starting... (boot:6f3b892c-d29e-4c61-8f43-845fc2f2e50a)
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5501] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5586] manager[0x561345165000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5642] hostname: hostname: using hostnamed
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5643] hostname: static hostname changed from (none) to "np0005593912.novalocal"
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5648] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5766] manager[0x561345165000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5767] manager[0x561345165000]: rfkill: WWAN hardware radio set enabled
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5810] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5812] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5813] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5814] manager: Networking is enabled by state file
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5816] settings: Loaded settings plugin: keyfile (internal)
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5839] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5859] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5869] dhcp: init: Using DHCP client 'internal'
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5872] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5887] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5893] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5903] device (lo): Activation: starting connection 'lo' (8f9a105d-dccb-4e29-a947-e22575a1eaec)
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5914] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5918] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5970] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5974] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5976] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5977] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5979] device (eth0): carrier: link connected
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5981] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5987] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5994] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Started Network Manager.
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.5999] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6000] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6002] manager: NetworkManager state is now CONNECTING
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6003] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6012] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Reached target Network.
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6016] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6054] dhcp4 (eth0): state changed new lease, address=38.129.56.100
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6063] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6082] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6172] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6174] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6177] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6182] device (lo): Activation: successful, device activated.
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6187] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6190] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6192] device (eth0): Activation: successful, device activated.
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6197] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 17:54:52 np0005593912.novalocal NetworkManager[861]: <info>  [1769190892.6199] manager: startup complete
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Reached target NFS client services.
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: Reached target Remote File Systems.
Jan 23 17:54:52 np0005593912.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: Cloud-init v. 24.4-8.el9 running 'init' at Fri, 23 Jan 2026 17:54:53 +0000. Up 9.09 seconds.
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: |  eth0  | True |        38.129.56.100         | 255.255.255.0 | global | fa:16:3e:9d:af:c7 |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe9d:afc7/64 |       .       |  link  | fa:16:3e:9d:af:c7 |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 23 17:54:53 np0005593912.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 17:54:53 np0005593912.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Jan 23 17:54:53 np0005593912.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 23 17:54:53 np0005593912.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Jan 23 17:54:53 np0005593912.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Jan 23 17:54:53 np0005593912.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Jan 23 17:54:53 np0005593912.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: Generating public/private rsa key pair.
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: The key fingerprint is:
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: SHA256:azQk8dDLkN/tMA3bbGFCmqYl8myc4SY8xoaGIE8VO7M root@np0005593912.novalocal
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: The key's randomart image is:
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: +---[RSA 3072]----+
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |    o.oo ..      |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |   . .o+.oo o    |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |o . = ++Bo X .   |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |o+ + X X+ = *    |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |. + E X S  =     |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: | . o = . o  .    |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |        o        |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |       .         |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |                 |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: +----[SHA256]-----+
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: The key fingerprint is:
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: SHA256:iG0938JHYJ8CiD10pV0bEhtlpJobzBODjqAdVmEnB8k root@np0005593912.novalocal
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: The key's randomart image is:
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: +---[ECDSA 256]---+
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |   .*++ ..=+*    |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |   oEB + o B o   |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |  + . = = * .    |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: | + o = * B o .   |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |. . o = S . +    |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |     .   B +     |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |        . + o    |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |           o     |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |                 |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: +----[SHA256]-----+
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: The key fingerprint is:
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: SHA256:qkFMST52X7oU5Uue7lcPJVH/6J7Ev7d0vJAjs9HC7U0 root@np0005593912.novalocal
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: The key's randomart image is:
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: +--[ED25519 256]--+
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |    .     .    ..|
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |   o .   o    . .|
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |    * . . +    ..|
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |   + o . * o  ..o|
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |    o   S +   .o.|
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |   .   o o. oo+. |
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |    . . . .* B+Eo|
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |     o   .  Oo*o=|
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: |    .     .o .o==|
Jan 23 17:54:54 np0005593912.novalocal cloud-init[924]: +----[SHA256]-----+
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Reached target Network is Online.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Starting System Logging Service...
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 23 17:54:54 np0005593912.novalocal sm-notify[1006]: Version 2.5.4 starting
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Starting Permit User Sessions...
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 23 17:54:54 np0005593912.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 23 17:54:54 np0005593912.novalocal sshd[1008]: Server listening on :: port 22.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Finished Permit User Sessions.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Started Command Scheduler.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Started Getty on tty1.
Jan 23 17:54:54 np0005593912.novalocal crond[1011]: (CRON) STARTUP (1.5.7)
Jan 23 17:54:54 np0005593912.novalocal crond[1011]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 23 17:54:54 np0005593912.novalocal crond[1011]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 12% if used.)
Jan 23 17:54:54 np0005593912.novalocal crond[1011]: (CRON) INFO (running with inotify support)
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Reached target Login Prompts.
Jan 23 17:54:54 np0005593912.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 23 17:54:54 np0005593912.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Started System Logging Service.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Reached target Multi-User System.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 23 17:54:54 np0005593912.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 17:54:54 np0005593912.novalocal kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Jan 23 17:54:54 np0005593912.novalocal kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 23 17:54:54 np0005593912.novalocal cloud-init[1132]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Fri, 23 Jan 2026 17:54:54 +0000. Up 10.69 seconds.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 23 17:54:54 np0005593912.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 23 17:54:55 np0005593912.novalocal dracut[1267]: dracut-057-102.git20250818.el9
Jan 23 17:54:55 np0005593912.novalocal cloud-init[1285]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Fri, 23 Jan 2026 17:54:55 +0000. Up 11.10 seconds.
Jan 23 17:54:55 np0005593912.novalocal cloud-init[1287]: #############################################################
Jan 23 17:54:55 np0005593912.novalocal cloud-init[1288]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 23 17:54:55 np0005593912.novalocal cloud-init[1290]: 256 SHA256:iG0938JHYJ8CiD10pV0bEhtlpJobzBODjqAdVmEnB8k root@np0005593912.novalocal (ECDSA)
Jan 23 17:54:55 np0005593912.novalocal cloud-init[1292]: 256 SHA256:qkFMST52X7oU5Uue7lcPJVH/6J7Ev7d0vJAjs9HC7U0 root@np0005593912.novalocal (ED25519)
Jan 23 17:54:55 np0005593912.novalocal cloud-init[1296]: 3072 SHA256:azQk8dDLkN/tMA3bbGFCmqYl8myc4SY8xoaGIE8VO7M root@np0005593912.novalocal (RSA)
Jan 23 17:54:55 np0005593912.novalocal cloud-init[1298]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 23 17:54:55 np0005593912.novalocal cloud-init[1302]: #############################################################
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 23 17:54:55 np0005593912.novalocal cloud-init[1285]: Cloud-init v. 24.4-8.el9 finished at Fri, 23 Jan 2026 17:54:55 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.28 seconds
Jan 23 17:54:55 np0005593912.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 23 17:54:55 np0005593912.novalocal systemd[1]: Reached target Cloud-init target.
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 23 17:54:55 np0005593912.novalocal sshd-session[1482]: Unable to negotiate with 38.102.83.114 port 42940: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 23 17:54:55 np0005593912.novalocal sshd-session[1506]: Unable to negotiate with 38.102.83.114 port 42954: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 23 17:54:55 np0005593912.novalocal sshd-session[1517]: Unable to negotiate with 38.102.83.114 port 42966: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 23 17:54:55 np0005593912.novalocal sshd-session[1468]: Connection closed by 38.102.83.114 port 42926 [preauth]
Jan 23 17:54:55 np0005593912.novalocal sshd-session[1550]: Unable to negotiate with 38.102.83.114 port 42986: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 23 17:54:55 np0005593912.novalocal sshd-session[1495]: Connection closed by 38.102.83.114 port 42944 [preauth]
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 23 17:54:55 np0005593912.novalocal sshd-session[1556]: Unable to negotiate with 38.102.83.114 port 43002: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 23 17:54:55 np0005593912.novalocal dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 23 17:54:55 np0005593912.novalocal sshd-session[1526]: Connection closed by 38.102.83.114 port 42976 [preauth]
Jan 23 17:54:56 np0005593912.novalocal sshd-session[1540]: Connection closed by 38.102.83.114 port 42982 [preauth]
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: memstrack is not available
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: memstrack is not available
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 23 17:54:56 np0005593912.novalocal dracut[1271]: *** Including module: systemd ***
Jan 23 17:54:57 np0005593912.novalocal dracut[1271]: *** Including module: fips ***
Jan 23 17:54:57 np0005593912.novalocal chronyd[794]: Selected source 131.153.171.250 (2.centos.pool.ntp.org)
Jan 23 17:54:57 np0005593912.novalocal chronyd[794]: System clock TAI offset set to 37 seconds
Jan 23 17:54:57 np0005593912.novalocal dracut[1271]: *** Including module: systemd-initrd ***
Jan 23 17:54:57 np0005593912.novalocal dracut[1271]: *** Including module: i18n ***
Jan 23 17:54:57 np0005593912.novalocal dracut[1271]: *** Including module: drm ***
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]: *** Including module: prefixdevname ***
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]: *** Including module: kernel-modules ***
Jan 23 17:54:58 np0005593912.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]: *** Including module: kernel-modules-extra ***
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]: *** Including module: qemu ***
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]: *** Including module: fstab-sys ***
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]: *** Including module: rootfs-block ***
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]: *** Including module: terminfo ***
Jan 23 17:54:58 np0005593912.novalocal dracut[1271]: *** Including module: udev-rules ***
Jan 23 17:54:59 np0005593912.novalocal dracut[1271]: Skipping udev rule: 91-permissions.rules
Jan 23 17:54:59 np0005593912.novalocal dracut[1271]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 23 17:54:59 np0005593912.novalocal dracut[1271]: *** Including module: virtiofs ***
Jan 23 17:54:59 np0005593912.novalocal dracut[1271]: *** Including module: dracut-systemd ***
Jan 23 17:54:59 np0005593912.novalocal dracut[1271]: *** Including module: usrmount ***
Jan 23 17:54:59 np0005593912.novalocal dracut[1271]: *** Including module: base ***
Jan 23 17:54:59 np0005593912.novalocal dracut[1271]: *** Including module: fs-lib ***
Jan 23 17:54:59 np0005593912.novalocal dracut[1271]: *** Including module: kdumpbase ***
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:   microcode_ctl module: mangling fw_dir
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: configuration "intel" is ignored
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]: *** Including module: openssl ***
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]: *** Including module: shutdown ***
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]: *** Including module: squash ***
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]: *** Including modules done ***
Jan 23 17:55:00 np0005593912.novalocal dracut[1271]: *** Installing kernel module dependencies ***
Jan 23 17:55:01 np0005593912.novalocal dracut[1271]: *** Installing kernel module dependencies done ***
Jan 23 17:55:01 np0005593912.novalocal dracut[1271]: *** Resolving executable dependencies ***
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: IRQ 25 affinity is now unmanaged
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: IRQ 31 affinity is now unmanaged
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: IRQ 28 affinity is now unmanaged
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: IRQ 32 affinity is now unmanaged
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: IRQ 30 affinity is now unmanaged
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 23 17:55:01 np0005593912.novalocal irqbalance[786]: IRQ 29 affinity is now unmanaged
Jan 23 17:55:02 np0005593912.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 17:55:02 np0005593912.novalocal dracut[1271]: *** Resolving executable dependencies done ***
Jan 23 17:55:02 np0005593912.novalocal dracut[1271]: *** Generating early-microcode cpio image ***
Jan 23 17:55:02 np0005593912.novalocal dracut[1271]: *** Store current command line parameters ***
Jan 23 17:55:02 np0005593912.novalocal dracut[1271]: Stored kernel commandline:
Jan 23 17:55:02 np0005593912.novalocal dracut[1271]: No dracut internal kernel commandline stored in the initramfs
Jan 23 17:55:03 np0005593912.novalocal dracut[1271]: *** Install squash loader ***
Jan 23 17:55:03 np0005593912.novalocal dracut[1271]: *** Squashing the files inside the initramfs ***
Jan 23 17:55:04 np0005593912.novalocal dracut[1271]: *** Squashing the files inside the initramfs done ***
Jan 23 17:55:04 np0005593912.novalocal dracut[1271]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 23 17:55:05 np0005593912.novalocal dracut[1271]: *** Hardlinking files ***
Jan 23 17:55:05 np0005593912.novalocal dracut[1271]: Mode:           real
Jan 23 17:55:05 np0005593912.novalocal dracut[1271]: Files:          50
Jan 23 17:55:05 np0005593912.novalocal dracut[1271]: Linked:         0 files
Jan 23 17:55:05 np0005593912.novalocal dracut[1271]: Compared:       0 xattrs
Jan 23 17:55:05 np0005593912.novalocal dracut[1271]: Compared:       0 files
Jan 23 17:55:05 np0005593912.novalocal dracut[1271]: Saved:          0 B
Jan 23 17:55:05 np0005593912.novalocal dracut[1271]: Duration:       0.000545 seconds
Jan 23 17:55:05 np0005593912.novalocal dracut[1271]: *** Hardlinking files done ***
Jan 23 17:55:05 np0005593912.novalocal dracut[1271]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 23 17:55:06 np0005593912.novalocal kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Jan 23 17:55:06 np0005593912.novalocal kdumpctl[1016]: kdump: Starting kdump: [OK]
Jan 23 17:55:06 np0005593912.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 23 17:55:06 np0005593912.novalocal systemd[1]: Startup finished in 2.984s (kernel) + 2.524s (initrd) + 16.733s (userspace) = 22.242s.
Jan 23 17:55:22 np0005593912.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 17:56:03 np0005593912.novalocal chronyd[794]: Selected source 173.206.123.141 (2.centos.pool.ntp.org)
Jan 23 17:56:57 np0005593912.novalocal sshd-session[4305]: Accepted publickey for zuul from 38.102.83.114 port 39936 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 23 17:56:57 np0005593912.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 23 17:56:57 np0005593912.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 23 17:56:57 np0005593912.novalocal systemd-logind[792]: New session 1 of user zuul.
Jan 23 17:56:57 np0005593912.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 23 17:56:57 np0005593912.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 23 17:56:57 np0005593912.novalocal systemd[4309]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Queued start job for default target Main User Target.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Created slice User Application Slice.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Reached target Paths.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Reached target Timers.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Starting D-Bus User Message Bus Socket...
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Starting Create User's Volatile Files and Directories...
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Finished Create User's Volatile Files and Directories.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Listening on D-Bus User Message Bus Socket.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Reached target Sockets.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Reached target Basic System.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Reached target Main User Target.
Jan 23 17:56:58 np0005593912.novalocal systemd[4309]: Startup finished in 119ms.
Jan 23 17:56:58 np0005593912.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 23 17:56:58 np0005593912.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 23 17:56:58 np0005593912.novalocal sshd-session[4305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 17:56:58 np0005593912.novalocal python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 17:57:00 np0005593912.novalocal python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 17:57:06 np0005593912.novalocal python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 17:57:07 np0005593912.novalocal python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 23 17:57:09 np0005593912.novalocal python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFJcMRIsK+0uYz+tLbFSr31DSo8gBxF9vL7NznRnrRlZBzcq7ZbInQshMvvyIWYj8tcwE3Vxd9bGYn+CrD5gflgoCUPkVTCNKNOUF+DE7mhjTzUmqfMqS+Mlnyf3nNaisSSVB4RGH88JBApd49UPaKu8rRQjdadgI7saY3lMxtuJaUsyZlRh/tRYfRKguEtA5SENl2fghvOmOkPQ4DlslMZHZLRh3sauFPZBq4PLxs3P2CZBg67FEo9EqtM9j4ef7d/toBMZCr40gQcd5IAXnifubIFp912cY1mlSHtWl+CPKRGNScAdDvYtkgjEcpbWpq76dQmc/k/aaQKPU9uwSksDI3l9w89OtIwU/0YnYXNRTfTbaaBsO8WQ0P0yeE79zjU0PPAUIsnz8rQcwXSb5p8GDIHiEO4JIfrUbYtOG7zsU+O1J1VvFF3frpS0zmAXQEAhJGxF+7uBUe4Fn1SoGIc7Zo674tLhJkN61uj3fJEB5waixzG3kuDPym0st/G40= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:09 np0005593912.novalocal python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:09 np0005593912.novalocal python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 17:57:10 np0005593912.novalocal python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769191029.6263995-207-105842284691712/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=b3bbde66f75a4e54ab424ff6fbdca461_id_rsa follow=False checksum=ebfe71ea7e0f9aaecab5409059377b76d07610cd backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:10 np0005593912.novalocal python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 17:57:11 np0005593912.novalocal python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769191030.567596-240-73797116140327/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=b3bbde66f75a4e54ab424ff6fbdca461_id_rsa.pub follow=False checksum=a80a541c53ef42ee0c3f95542af1d48dc9bfe084 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:12 np0005593912.novalocal python3[4979]: ansible-ping Invoked with data=pong
Jan 23 17:57:13 np0005593912.novalocal python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 17:57:15 np0005593912.novalocal python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 23 17:57:16 np0005593912.novalocal python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:16 np0005593912.novalocal python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:16 np0005593912.novalocal python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:17 np0005593912.novalocal python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:17 np0005593912.novalocal python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:17 np0005593912.novalocal python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:19 np0005593912.novalocal sudo[5237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjsilqoxwgqjrdjexttdyzpykaoxeepv ; /usr/bin/python3'
Jan 23 17:57:19 np0005593912.novalocal sudo[5237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:19 np0005593912.novalocal python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:19 np0005593912.novalocal sudo[5237]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:19 np0005593912.novalocal sudo[5315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxynnojynfzjupirrbitfoayamqnekvl ; /usr/bin/python3'
Jan 23 17:57:19 np0005593912.novalocal sudo[5315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:19 np0005593912.novalocal python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 17:57:19 np0005593912.novalocal sudo[5315]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:20 np0005593912.novalocal sudo[5388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prqaswvfybgssunpsmzwlexuovhfplqq ; /usr/bin/python3'
Jan 23 17:57:20 np0005593912.novalocal sudo[5388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:20 np0005593912.novalocal python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769191039.4532413-21-111224497821801/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:20 np0005593912.novalocal sudo[5388]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:20 np0005593912.novalocal python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:21 np0005593912.novalocal python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:21 np0005593912.novalocal python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:21 np0005593912.novalocal python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:21 np0005593912.novalocal python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:22 np0005593912.novalocal python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:22 np0005593912.novalocal python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:22 np0005593912.novalocal python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:23 np0005593912.novalocal python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:23 np0005593912.novalocal python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:23 np0005593912.novalocal python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:23 np0005593912.novalocal python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:24 np0005593912.novalocal python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:24 np0005593912.novalocal python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:24 np0005593912.novalocal python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:25 np0005593912.novalocal python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:25 np0005593912.novalocal python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:25 np0005593912.novalocal python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:25 np0005593912.novalocal python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:26 np0005593912.novalocal python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:26 np0005593912.novalocal python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:27 np0005593912.novalocal python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:27 np0005593912.novalocal python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:27 np0005593912.novalocal python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:28 np0005593912.novalocal python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:28 np0005593912.novalocal python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 17:57:30 np0005593912.novalocal sudo[6062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftloafrtrdgxeihrypppcybuarqsfmsb ; /usr/bin/python3'
Jan 23 17:57:30 np0005593912.novalocal sudo[6062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:30 np0005593912.novalocal python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 17:57:31 np0005593912.novalocal systemd[1]: Starting Time & Date Service...
Jan 23 17:57:31 np0005593912.novalocal systemd[1]: Started Time & Date Service.
Jan 23 17:57:32 np0005593912.novalocal systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 23 17:57:32 np0005593912.novalocal sudo[6062]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:32 np0005593912.novalocal sudo[6093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgicebpseezglodlsielqegfsutiulid ; /usr/bin/python3'
Jan 23 17:57:32 np0005593912.novalocal sudo[6093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:32 np0005593912.novalocal python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:32 np0005593912.novalocal sudo[6093]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:32 np0005593912.novalocal python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 17:57:33 np0005593912.novalocal python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769191052.6411777-153-141246630555383/source _original_basename=tmptqot5znw follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:33 np0005593912.novalocal python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 17:57:34 np0005593912.novalocal python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769191053.4446537-183-101854285369250/source _original_basename=tmp2xdjegfy follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:34 np0005593912.novalocal sudo[6513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idpcpqbbvnmfbimkzieagdofkvbxcjbj ; /usr/bin/python3'
Jan 23 17:57:34 np0005593912.novalocal sudo[6513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:34 np0005593912.novalocal python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 17:57:34 np0005593912.novalocal sudo[6513]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:34 np0005593912.novalocal sudo[6586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weoolfthyanroykzhmxgwzdwhtpgprlm ; /usr/bin/python3'
Jan 23 17:57:34 np0005593912.novalocal sudo[6586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:35 np0005593912.novalocal python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769191054.4071646-231-113843532992979/source _original_basename=tmpe5btkltk follow=False checksum=ffd0de2cbde8f1bfa1022e98cee8601a871a4a53 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:35 np0005593912.novalocal sudo[6586]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:35 np0005593912.novalocal python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 17:57:35 np0005593912.novalocal python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 17:57:36 np0005593912.novalocal sudo[6740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnccwkflbiqanekffgeiukpoodxgjtqh ; /usr/bin/python3'
Jan 23 17:57:36 np0005593912.novalocal sudo[6740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:36 np0005593912.novalocal python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 17:57:36 np0005593912.novalocal sudo[6740]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:36 np0005593912.novalocal sudo[6813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibafdabokcijrhqkblqazqnpymegowbd ; /usr/bin/python3'
Jan 23 17:57:36 np0005593912.novalocal sudo[6813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:36 np0005593912.novalocal python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769191056.0751803-273-232985472469089/source _original_basename=tmpi9mwp5rm follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:36 np0005593912.novalocal sudo[6813]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:37 np0005593912.novalocal sudo[6864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzjophptbsfottjkzlvozcsvlsbeoih ; /usr/bin/python3'
Jan 23 17:57:37 np0005593912.novalocal sudo[6864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:37 np0005593912.novalocal python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-7fb3-b2df-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 17:57:37 np0005593912.novalocal sudo[6864]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:38 np0005593912.novalocal python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-7fb3-b2df-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 23 17:57:39 np0005593912.novalocal python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:57 np0005593912.novalocal sudo[6947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yijzwmjbohlfnlhzsalqfsnccwwurrsa ; /usr/bin/python3'
Jan 23 17:57:57 np0005593912.novalocal sudo[6947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:57:58 np0005593912.novalocal python3[6949]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:57:58 np0005593912.novalocal sudo[6947]: pam_unix(sudo:session): session closed for user root
Jan 23 17:58:02 np0005593912.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 17:58:31 np0005593912.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 17:58:31 np0005593912.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 23 17:58:31 np0005593912.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 23 17:58:31 np0005593912.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 23 17:58:31 np0005593912.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 23 17:58:31 np0005593912.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 23 17:58:31 np0005593912.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 23 17:58:31 np0005593912.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 23 17:58:31 np0005593912.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 23 17:58:31 np0005593912.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1549] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 17:58:31 np0005593912.novalocal systemd-udevd[6953]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1708] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1733] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1735] device (eth1): carrier: link connected
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1736] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1741] policy: auto-activating connection 'Wired connection 1' (bc036e08-846c-34c0-9a7c-cba0a7407da5)
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1744] device (eth1): Activation: starting connection 'Wired connection 1' (bc036e08-846c-34c0-9a7c-cba0a7407da5)
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1745] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1747] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1750] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 17:58:31 np0005593912.novalocal NetworkManager[861]: <info>  [1769191111.1753] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 17:58:32 np0005593912.novalocal python3[6979]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-e577-0abb-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 17:58:42 np0005593912.novalocal sudo[7057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smuzfkglghamwwhsjhtdjzymbiqpbgof ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 17:58:42 np0005593912.novalocal sudo[7057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:58:42 np0005593912.novalocal python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 17:58:42 np0005593912.novalocal sudo[7057]: pam_unix(sudo:session): session closed for user root
Jan 23 17:58:42 np0005593912.novalocal sudo[7130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnaaxvnhawoxvjbverxinzzzmqqlhfsi ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 17:58:42 np0005593912.novalocal sudo[7130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:58:42 np0005593912.novalocal python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769191122.1288126-102-60756119502911/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=82aaaf9028bf41ec59c1184de60b10dee46c94bd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:58:42 np0005593912.novalocal sudo[7130]: pam_unix(sudo:session): session closed for user root
Jan 23 17:58:43 np0005593912.novalocal sudo[7180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byifmvhicrljgopsivuiplpphzdlypuv ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 17:58:43 np0005593912.novalocal sudo[7180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:58:43 np0005593912.novalocal python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Stopping Network Manager...
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[861]: <info>  [1769191123.6749] caught SIGTERM, shutting down normally.
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[861]: <info>  [1769191123.6761] dhcp4 (eth0): canceled DHCP transaction
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[861]: <info>  [1769191123.6761] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[861]: <info>  [1769191123.6761] dhcp4 (eth0): state changed no lease
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[861]: <info>  [1769191123.6764] manager: NetworkManager state is now CONNECTING
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[861]: <info>  [1769191123.6856] dhcp4 (eth1): canceled DHCP transaction
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[861]: <info>  [1769191123.6856] dhcp4 (eth1): state changed no lease
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[861]: <info>  [1769191123.6917] exiting (success)
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Stopped Network Manager.
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: NetworkManager.service: Consumed 1.557s CPU time, 10.2M memory peak.
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Starting Network Manager...
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.7476] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:6f3b892c-d29e-4c61-8f43-845fc2f2e50a)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.7481] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.7542] manager[0x55a66d0c9000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Starting Hostname Service...
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Started Hostname Service.
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8285] hostname: hostname: using hostnamed
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8289] hostname: static hostname changed from (none) to "np0005593912.novalocal"
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8295] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8301] manager[0x55a66d0c9000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8301] manager[0x55a66d0c9000]: rfkill: WWAN hardware radio set enabled
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8331] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8331] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8332] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8333] manager: Networking is enabled by state file
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8336] settings: Loaded settings plugin: keyfile (internal)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8340] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8368] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8378] dhcp: init: Using DHCP client 'internal'
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8380] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8386] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8392] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8400] device (lo): Activation: starting connection 'lo' (8f9a105d-dccb-4e29-a947-e22575a1eaec)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8407] device (eth0): carrier: link connected
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8411] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8415] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8416] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8421] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8428] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8435] device (eth1): carrier: link connected
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8439] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8443] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (bc036e08-846c-34c0-9a7c-cba0a7407da5) (indicated)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8444] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8450] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8456] device (eth1): Activation: starting connection 'Wired connection 1' (bc036e08-846c-34c0-9a7c-cba0a7407da5)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8463] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Started Network Manager.
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8468] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8469] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8470] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8472] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8473] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8476] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8477] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8480] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8484] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8486] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8494] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8495] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8520] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8521] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8526] device (lo): Activation: successful, device activated.
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8532] dhcp4 (eth0): state changed new lease, address=38.129.56.100
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8537] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 17:58:43 np0005593912.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8594] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8615] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8616] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8619] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8622] device (eth0): Activation: successful, device activated.
Jan 23 17:58:43 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191123.8625] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 17:58:43 np0005593912.novalocal sudo[7180]: pam_unix(sudo:session): session closed for user root
Jan 23 17:58:44 np0005593912.novalocal python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-e577-0abb-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 17:58:53 np0005593912.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 17:59:11 np0005593912.novalocal systemd[4309]: Starting Mark boot as successful...
Jan 23 17:59:11 np0005593912.novalocal systemd[4309]: Finished Mark boot as successful.
Jan 23 17:59:13 np0005593912.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 17:59:28 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191168.9627] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 17:59:28 np0005593912.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 17:59:28 np0005593912.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 17:59:28 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191168.9937] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 17:59:28 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191168.9939] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 17:59:28 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191168.9945] device (eth1): Activation: successful, device activated.
Jan 23 17:59:28 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191168.9952] manager: startup complete
Jan 23 17:59:28 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191168.9955] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 23 17:59:28 np0005593912.novalocal NetworkManager[7193]: <warn>  [1769191168.9959] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 23 17:59:28 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191168.9966] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 23 17:59:28 np0005593912.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0138] dhcp4 (eth1): canceled DHCP transaction
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0139] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0139] dhcp4 (eth1): state changed no lease
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0153] policy: auto-activating connection 'ci-private-network' (dbab0fcc-2806-5bf7-af25-c9e9c0b22e49)
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0157] device (eth1): Activation: starting connection 'ci-private-network' (dbab0fcc-2806-5bf7-af25-c9e9c0b22e49)
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0158] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0160] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0166] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0173] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0208] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0210] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 17:59:29 np0005593912.novalocal NetworkManager[7193]: <info>  [1769191169.0214] device (eth1): Activation: successful, device activated.
Jan 23 17:59:39 np0005593912.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 17:59:43 np0005593912.novalocal sudo[7370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyrydcuhtoteqymaxauguamoqvflkuop ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 17:59:43 np0005593912.novalocal sudo[7370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:59:43 np0005593912.novalocal python3[7372]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 17:59:43 np0005593912.novalocal sudo[7370]: pam_unix(sudo:session): session closed for user root
Jan 23 17:59:43 np0005593912.novalocal sudo[7443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noqamqlmelchydbjzcwrctlphvtxjmsa ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 17:59:43 np0005593912.novalocal sudo[7443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 17:59:43 np0005593912.novalocal python3[7445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769191183.1941142-267-18537126110495/source _original_basename=tmp2eajpnso follow=False checksum=272d3ca166fa86a89e6559238548e457813fa32e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 17:59:43 np0005593912.novalocal sudo[7443]: pam_unix(sudo:session): session closed for user root
Jan 23 18:00:43 np0005593912.novalocal sshd-session[4318]: Received disconnect from 38.102.83.114 port 39936:11: disconnected by user
Jan 23 18:00:43 np0005593912.novalocal sshd-session[4318]: Disconnected from user zuul 38.102.83.114 port 39936
Jan 23 18:00:43 np0005593912.novalocal sshd-session[4305]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:00:43 np0005593912.novalocal systemd-logind[792]: Session 1 logged out. Waiting for processes to exit.
Jan 23 18:01:01 np0005593912.novalocal CROND[7471]: (root) CMD (run-parts /etc/cron.hourly)
Jan 23 18:01:01 np0005593912.novalocal run-parts[7474]: (/etc/cron.hourly) starting 0anacron
Jan 23 18:01:01 np0005593912.novalocal anacron[7482]: Anacron started on 2026-01-23
Jan 23 18:01:01 np0005593912.novalocal anacron[7482]: Will run job `cron.daily' in 36 min.
Jan 23 18:01:01 np0005593912.novalocal anacron[7482]: Will run job `cron.weekly' in 56 min.
Jan 23 18:01:01 np0005593912.novalocal anacron[7482]: Will run job `cron.monthly' in 76 min.
Jan 23 18:01:01 np0005593912.novalocal anacron[7482]: Jobs will be executed sequentially
Jan 23 18:01:01 np0005593912.novalocal run-parts[7484]: (/etc/cron.hourly) finished 0anacron
Jan 23 18:01:01 np0005593912.novalocal CROND[7470]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 23 18:02:11 np0005593912.novalocal systemd[4309]: Created slice User Background Tasks Slice.
Jan 23 18:02:11 np0005593912.novalocal systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Jan 23 18:02:11 np0005593912.novalocal systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Jan 23 18:06:50 np0005593912.novalocal sshd-session[7490]: Accepted publickey for zuul from 38.102.83.114 port 55628 ssh2: RSA SHA256:ZYPc6R6mrhDH6kGu4oaZ6KY9jveRJgYu3lvZrCF0Dxw
Jan 23 18:06:50 np0005593912.novalocal systemd-logind[792]: New session 3 of user zuul.
Jan 23 18:06:50 np0005593912.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 23 18:06:50 np0005593912.novalocal sshd-session[7490]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:06:50 np0005593912.novalocal sudo[7517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flpbuscpdckqdtkvaoyujitsdsidoadc ; /usr/bin/python3'
Jan 23 18:06:50 np0005593912.novalocal sudo[7517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:51 np0005593912.novalocal python3[7519]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-0b63-4045-000000002169-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:06:51 np0005593912.novalocal sudo[7517]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:51 np0005593912.novalocal sudo[7545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtkfbstccotcfpddvqaosbqtexiifxmu ; /usr/bin/python3'
Jan 23 18:06:51 np0005593912.novalocal sudo[7545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:51 np0005593912.novalocal python3[7547]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:06:51 np0005593912.novalocal sudo[7545]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:51 np0005593912.novalocal sudo[7571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llhpzkpmfsxzxkodorktklliwrddedsh ; /usr/bin/python3'
Jan 23 18:06:51 np0005593912.novalocal sudo[7571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:51 np0005593912.novalocal python3[7573]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:06:51 np0005593912.novalocal sudo[7571]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:51 np0005593912.novalocal sudo[7598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnpyspscggjjzztxymkdaufscpyiblwf ; /usr/bin/python3'
Jan 23 18:06:51 np0005593912.novalocal sudo[7598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:51 np0005593912.novalocal python3[7600]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:06:51 np0005593912.novalocal sudo[7598]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:52 np0005593912.novalocal sudo[7624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwadadzmywhjcwjpwenisjxjwpfzgqbm ; /usr/bin/python3'
Jan 23 18:06:52 np0005593912.novalocal sudo[7624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:52 np0005593912.novalocal python3[7626]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:06:52 np0005593912.novalocal sudo[7624]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:52 np0005593912.novalocal sudo[7650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxwzkujlafbnwoooawdiztugydylwhoi ; /usr/bin/python3'
Jan 23 18:06:52 np0005593912.novalocal sudo[7650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:52 np0005593912.novalocal python3[7652]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:06:52 np0005593912.novalocal sudo[7650]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:53 np0005593912.novalocal sudo[7728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uftwwzprcvmwkufeyxwgnvttppyhfbnx ; /usr/bin/python3'
Jan 23 18:06:53 np0005593912.novalocal sudo[7728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:53 np0005593912.novalocal python3[7730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:06:53 np0005593912.novalocal sudo[7728]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:53 np0005593912.novalocal sudo[7801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kamcpnqgmjomahzqegwqleopqpgjyhpb ; /usr/bin/python3'
Jan 23 18:06:53 np0005593912.novalocal sudo[7801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:53 np0005593912.novalocal python3[7803]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769191612.917961-501-103850187957218/source _original_basename=tmp6_qex7po follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:06:53 np0005593912.novalocal sudo[7801]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:54 np0005593912.novalocal sudo[7851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snuyqokjihlzhjiyhuvrobafjeflehqg ; /usr/bin/python3'
Jan 23 18:06:54 np0005593912.novalocal sudo[7851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:55 np0005593912.novalocal python3[7853]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 18:06:55 np0005593912.novalocal systemd[1]: Reloading.
Jan 23 18:06:55 np0005593912.novalocal systemd-rc-local-generator[7875]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:06:55 np0005593912.novalocal sudo[7851]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:56 np0005593912.novalocal sudo[7907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljgpejtmabxzkwyjsgepnyosfhsgtpow ; /usr/bin/python3'
Jan 23 18:06:56 np0005593912.novalocal sudo[7907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:56 np0005593912.novalocal python3[7909]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 23 18:06:56 np0005593912.novalocal sudo[7907]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:56 np0005593912.novalocal sudo[7933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fngjxbwbbmdwkuuiunscztphnmaccrby ; /usr/bin/python3'
Jan 23 18:06:56 np0005593912.novalocal sudo[7933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:57 np0005593912.novalocal python3[7935]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:06:57 np0005593912.novalocal sudo[7933]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:57 np0005593912.novalocal sudo[7961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvnhcgjroqhelrwjgvkzibsmweronpvb ; /usr/bin/python3'
Jan 23 18:06:57 np0005593912.novalocal sudo[7961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:57 np0005593912.novalocal python3[7963]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:06:57 np0005593912.novalocal sudo[7961]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:57 np0005593912.novalocal sudo[7989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbippyhmiypncdvdactxjhkwsteaoyir ; /usr/bin/python3'
Jan 23 18:06:57 np0005593912.novalocal sudo[7989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:57 np0005593912.novalocal python3[7991]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:06:57 np0005593912.novalocal sudo[7989]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:57 np0005593912.novalocal sudo[8017]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-migpqnclndbwkbipzursupidvpbfpzer ; /usr/bin/python3'
Jan 23 18:06:57 np0005593912.novalocal sudo[8017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:06:57 np0005593912.novalocal python3[8019]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:06:57 np0005593912.novalocal sudo[8017]: pam_unix(sudo:session): session closed for user root
Jan 23 18:06:58 np0005593912.novalocal python3[8046]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163efc-24cc-0b63-4045-000000002170-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:06:59 np0005593912.novalocal python3[8076]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 18:07:01 np0005593912.novalocal sshd-session[7493]: Connection closed by 38.102.83.114 port 55628
Jan 23 18:07:01 np0005593912.novalocal sshd-session[7490]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:07:01 np0005593912.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 18:07:01 np0005593912.novalocal systemd[1]: session-3.scope: Consumed 4.169s CPU time.
Jan 23 18:07:01 np0005593912.novalocal systemd-logind[792]: Session 3 logged out. Waiting for processes to exit.
Jan 23 18:07:01 np0005593912.novalocal systemd-logind[792]: Removed session 3.
Jan 23 18:07:01 np0005593912.novalocal irqbalance[786]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 23 18:07:01 np0005593912.novalocal irqbalance[786]: IRQ 27 affinity is now unmanaged
Jan 23 18:07:03 np0005593912.novalocal sshd-session[8081]: Accepted publickey for zuul from 38.102.83.114 port 56214 ssh2: RSA SHA256:ZYPc6R6mrhDH6kGu4oaZ6KY9jveRJgYu3lvZrCF0Dxw
Jan 23 18:07:03 np0005593912.novalocal systemd-logind[792]: New session 4 of user zuul.
Jan 23 18:07:03 np0005593912.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 23 18:07:03 np0005593912.novalocal sshd-session[8081]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:07:03 np0005593912.novalocal sudo[8108]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kydnjrmmvtecvptzqfrfnfkpzsqrvnab ; /usr/bin/python3'
Jan 23 18:07:03 np0005593912.novalocal sudo[8108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:07:03 np0005593912.novalocal python3[8110]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 18:07:11 np0005593912.novalocal setsebool[8152]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 23 18:07:11 np0005593912.novalocal setsebool[8152]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 23 18:07:26 np0005593912.novalocal kernel: SELinux:  Converting 386 SID table entries...
Jan 23 18:07:26 np0005593912.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 18:07:26 np0005593912.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 23 18:07:26 np0005593912.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 18:07:26 np0005593912.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 23 18:07:26 np0005593912.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 18:07:26 np0005593912.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 18:07:26 np0005593912.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 18:07:38 np0005593912.novalocal kernel: SELinux:  Converting 389 SID table entries...
Jan 23 18:07:38 np0005593912.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 18:07:38 np0005593912.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 23 18:07:38 np0005593912.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 18:07:38 np0005593912.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 23 18:07:38 np0005593912.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 18:07:38 np0005593912.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 18:07:38 np0005593912.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 18:07:59 np0005593912.novalocal dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 23 18:07:59 np0005593912.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 18:07:59 np0005593912.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 23 18:07:59 np0005593912.novalocal systemd[1]: Reloading.
Jan 23 18:07:59 np0005593912.novalocal systemd-rc-local-generator[8919]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:07:59 np0005593912.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 18:08:04 np0005593912.novalocal sudo[8108]: pam_unix(sudo:session): session closed for user root
Jan 23 18:08:07 np0005593912.novalocal python3[13154]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163efc-24cc-e191-9ac3-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:08:08 np0005593912.novalocal kernel: evm: overlay not supported
Jan 23 18:08:08 np0005593912.novalocal systemd[4309]: Starting D-Bus User Message Bus...
Jan 23 18:08:08 np0005593912.novalocal dbus-broker-launch[13995]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 23 18:08:08 np0005593912.novalocal dbus-broker-launch[13995]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 23 18:08:08 np0005593912.novalocal systemd[4309]: Started D-Bus User Message Bus.
Jan 23 18:08:08 np0005593912.novalocal dbus-broker-lau[13995]: Ready
Jan 23 18:08:08 np0005593912.novalocal systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 23 18:08:08 np0005593912.novalocal systemd[4309]: Created slice Slice /user.
Jan 23 18:08:08 np0005593912.novalocal systemd[4309]: podman-13975.scope: unit configures an IP firewall, but not running as root.
Jan 23 18:08:08 np0005593912.novalocal systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Jan 23 18:08:08 np0005593912.novalocal systemd[4309]: Started podman-13975.scope.
Jan 23 18:08:09 np0005593912.novalocal systemd[4309]: Started podman-pause-7dcac07d.scope.
Jan 23 18:08:09 np0005593912.novalocal sudo[14244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abmvjymwattlakfgaveyaxptnmycjsqh ; /usr/bin/python3'
Jan 23 18:08:09 np0005593912.novalocal sudo[14244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:08:09 np0005593912.novalocal python3[14252]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.166:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.166:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:08:09 np0005593912.novalocal python3[14252]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 23 18:08:09 np0005593912.novalocal sudo[14244]: pam_unix(sudo:session): session closed for user root
Jan 23 18:08:10 np0005593912.novalocal sshd-session[8084]: Connection closed by 38.102.83.114 port 56214
Jan 23 18:08:10 np0005593912.novalocal sshd-session[8081]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:08:10 np0005593912.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 18:08:10 np0005593912.novalocal systemd[1]: session-4.scope: Consumed 51.653s CPU time.
Jan 23 18:08:10 np0005593912.novalocal systemd-logind[792]: Session 4 logged out. Waiting for processes to exit.
Jan 23 18:08:10 np0005593912.novalocal systemd-logind[792]: Removed session 4.
Jan 23 18:08:30 np0005593912.novalocal sshd-session[22297]: Connection closed by 38.129.56.53 port 56042 [preauth]
Jan 23 18:08:30 np0005593912.novalocal sshd-session[22300]: Unable to negotiate with 38.129.56.53 port 56060: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 23 18:08:30 np0005593912.novalocal sshd-session[22298]: Connection closed by 38.129.56.53 port 56056 [preauth]
Jan 23 18:08:30 np0005593912.novalocal sshd-session[22302]: Unable to negotiate with 38.129.56.53 port 56064: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 23 18:08:30 np0005593912.novalocal sshd-session[22304]: Unable to negotiate with 38.129.56.53 port 56070: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 23 18:08:35 np0005593912.novalocal sshd-session[24356]: Accepted publickey for zuul from 38.102.83.114 port 50174 ssh2: RSA SHA256:ZYPc6R6mrhDH6kGu4oaZ6KY9jveRJgYu3lvZrCF0Dxw
Jan 23 18:08:35 np0005593912.novalocal systemd-logind[792]: New session 5 of user zuul.
Jan 23 18:08:35 np0005593912.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 23 18:08:35 np0005593912.novalocal sshd-session[24356]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:08:35 np0005593912.novalocal python3[24467]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBII3DpZWwSyncSr0hn9fTQLYoulrnI3thGGGKUcN27hkAxiQmMLCKxEfHwZj4NtRUQuDZPT2KRL2E+ftBFsWv/0= zuul@np0005593911.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 18:08:35 np0005593912.novalocal sudo[24667]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leqbyurotminecclqlwzatmholfntnbb ; /usr/bin/python3'
Jan 23 18:08:35 np0005593912.novalocal sudo[24667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:08:36 np0005593912.novalocal python3[24680]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBII3DpZWwSyncSr0hn9fTQLYoulrnI3thGGGKUcN27hkAxiQmMLCKxEfHwZj4NtRUQuDZPT2KRL2E+ftBFsWv/0= zuul@np0005593911.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 18:08:36 np0005593912.novalocal sudo[24667]: pam_unix(sudo:session): session closed for user root
Jan 23 18:08:36 np0005593912.novalocal sudo[25148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btyfcxtylrrolvwqyrpbtdtgbuilaqja ; /usr/bin/python3'
Jan 23 18:08:36 np0005593912.novalocal sudo[25148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:08:37 np0005593912.novalocal python3[25156]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005593912.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 23 18:08:37 np0005593912.novalocal useradd[25233]: new group: name=cloud-admin, GID=1002
Jan 23 18:08:37 np0005593912.novalocal useradd[25233]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 23 18:08:37 np0005593912.novalocal sudo[25148]: pam_unix(sudo:session): session closed for user root
Jan 23 18:08:37 np0005593912.novalocal sudo[25370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftdhevibvqdfkqfucvhaaglvjfkjbazn ; /usr/bin/python3'
Jan 23 18:08:37 np0005593912.novalocal sudo[25370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:08:37 np0005593912.novalocal python3[25380]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBII3DpZWwSyncSr0hn9fTQLYoulrnI3thGGGKUcN27hkAxiQmMLCKxEfHwZj4NtRUQuDZPT2KRL2E+ftBFsWv/0= zuul@np0005593911.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 18:08:37 np0005593912.novalocal sudo[25370]: pam_unix(sudo:session): session closed for user root
Jan 23 18:08:37 np0005593912.novalocal sudo[25689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-melssalmuqogjzyfcomievouptdjydub ; /usr/bin/python3'
Jan 23 18:08:37 np0005593912.novalocal sudo[25689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:08:37 np0005593912.novalocal python3[25699]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:08:38 np0005593912.novalocal sudo[25689]: pam_unix(sudo:session): session closed for user root
Jan 23 18:08:38 np0005593912.novalocal sudo[25984]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jewjhwyzfxiqswnnjkdeyzogwvjzbecc ; /usr/bin/python3'
Jan 23 18:08:38 np0005593912.novalocal sudo[25984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:08:38 np0005593912.novalocal python3[25993]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769191717.7214077-135-109658245427263/source _original_basename=tmpw1vrabxx follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:08:38 np0005593912.novalocal sudo[25984]: pam_unix(sudo:session): session closed for user root
Jan 23 18:08:39 np0005593912.novalocal sudo[26366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koxffbpwibnejvujkxjhhikvlcxlonjx ; /usr/bin/python3'
Jan 23 18:08:39 np0005593912.novalocal sudo[26366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:08:39 np0005593912.novalocal python3[26375]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 23 18:08:39 np0005593912.novalocal systemd[1]: Starting Hostname Service...
Jan 23 18:08:39 np0005593912.novalocal systemd[1]: Started Hostname Service.
Jan 23 18:08:39 np0005593912.novalocal systemd-hostnamed[26507]: Changed pretty hostname to 'compute-0'
Jan 23 18:08:39 compute-0 systemd-hostnamed[26507]: Hostname set to <compute-0> (static)
Jan 23 18:08:39 compute-0 NetworkManager[7193]: <info>  [1769191719.5667] hostname: static hostname changed from "np0005593912.novalocal" to "compute-0"
Jan 23 18:08:39 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 18:08:39 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 18:08:39 compute-0 sudo[26366]: pam_unix(sudo:session): session closed for user root
Jan 23 18:08:40 compute-0 sshd-session[24406]: Connection closed by 38.102.83.114 port 50174
Jan 23 18:08:40 compute-0 sshd-session[24356]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:08:40 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 18:08:40 compute-0 systemd[1]: session-5.scope: Consumed 2.357s CPU time.
Jan 23 18:08:40 compute-0 systemd-logind[792]: Session 5 logged out. Waiting for processes to exit.
Jan 23 18:08:40 compute-0 systemd-logind[792]: Removed session 5.
Jan 23 18:08:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 18:08:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 18:08:49 compute-0 systemd[1]: man-db-cache-update.service: Consumed 55.508s CPU time.
Jan 23 18:08:49 compute-0 systemd[1]: run-r3343f99e971b4f1b9ee30e6372d93745.service: Deactivated successfully.
Jan 23 18:08:49 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 18:09:09 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 18:10:11 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 23 18:10:11 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 23 18:10:11 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 23 18:10:11 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 23 18:13:54 compute-0 sshd-session[29945]: Accepted publickey for zuul from 38.129.56.53 port 52458 ssh2: RSA SHA256:ZYPc6R6mrhDH6kGu4oaZ6KY9jveRJgYu3lvZrCF0Dxw
Jan 23 18:13:54 compute-0 systemd-logind[792]: New session 6 of user zuul.
Jan 23 18:13:54 compute-0 systemd[1]: Started Session 6 of User zuul.
Jan 23 18:13:54 compute-0 sshd-session[29945]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:13:55 compute-0 python3[30021]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:13:56 compute-0 sudo[30135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkhjiydwozghwfewgpttpqzypvejqplr ; /usr/bin/python3'
Jan 23 18:13:56 compute-0 sudo[30135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:57 compute-0 python3[30137]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:13:57 compute-0 sudo[30135]: pam_unix(sudo:session): session closed for user root
Jan 23 18:13:57 compute-0 sudo[30208]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pynvndohrqhidpamxutgjdwviyzkrjkw ; /usr/bin/python3'
Jan 23 18:13:57 compute-0 sudo[30208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:57 compute-0 python3[30210]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769192036.7153242-33583-126017433162284/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:13:57 compute-0 sudo[30208]: pam_unix(sudo:session): session closed for user root
Jan 23 18:13:57 compute-0 sudo[30234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihczrmkmungpiaxxvcvhhccxrylivplg ; /usr/bin/python3'
Jan 23 18:13:57 compute-0 sudo[30234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:57 compute-0 python3[30236]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:13:57 compute-0 sudo[30234]: pam_unix(sudo:session): session closed for user root
Jan 23 18:13:57 compute-0 sudo[30307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqrwrkdhcshmhhfgsjgvtybvxqrnfsjw ; /usr/bin/python3'
Jan 23 18:13:57 compute-0 sudo[30307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:58 compute-0 python3[30309]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769192036.7153242-33583-126017433162284/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:13:58 compute-0 sudo[30307]: pam_unix(sudo:session): session closed for user root
Jan 23 18:13:58 compute-0 sudo[30333]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sagepmvwyvacfdvxswvseqnvdferuvwe ; /usr/bin/python3'
Jan 23 18:13:58 compute-0 sudo[30333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:58 compute-0 python3[30335]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:13:58 compute-0 sudo[30333]: pam_unix(sudo:session): session closed for user root
Jan 23 18:13:58 compute-0 sudo[30406]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmprwismodxdtmgmcwwmmezsiamvnada ; /usr/bin/python3'
Jan 23 18:13:58 compute-0 sudo[30406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:58 compute-0 python3[30408]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769192036.7153242-33583-126017433162284/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:13:58 compute-0 sudo[30406]: pam_unix(sudo:session): session closed for user root
Jan 23 18:13:58 compute-0 sudo[30432]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmisddenclbqsqhbwrqgnerlhlxqgndn ; /usr/bin/python3'
Jan 23 18:13:58 compute-0 sudo[30432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:58 compute-0 python3[30434]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:13:58 compute-0 sudo[30432]: pam_unix(sudo:session): session closed for user root
Jan 23 18:13:59 compute-0 sudo[30505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frlxjfpxpgrygxnczftrzksuqfvrzsui ; /usr/bin/python3'
Jan 23 18:13:59 compute-0 sudo[30505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:59 compute-0 python3[30507]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769192036.7153242-33583-126017433162284/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:13:59 compute-0 sudo[30505]: pam_unix(sudo:session): session closed for user root
Jan 23 18:13:59 compute-0 sudo[30531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qocrnotwjvvebjggolpmmqzadehzmsum ; /usr/bin/python3'
Jan 23 18:13:59 compute-0 sudo[30531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:59 compute-0 python3[30533]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:13:59 compute-0 sudo[30531]: pam_unix(sudo:session): session closed for user root
Jan 23 18:13:59 compute-0 sudo[30604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfxbtqxsbawjittjykjriutvfrzqzmwn ; /usr/bin/python3'
Jan 23 18:13:59 compute-0 sudo[30604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:59 compute-0 python3[30606]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769192036.7153242-33583-126017433162284/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:13:59 compute-0 sudo[30604]: pam_unix(sudo:session): session closed for user root
Jan 23 18:13:59 compute-0 sudo[30630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lldmnivocvagdvbkipmqggyiiftctrpe ; /usr/bin/python3'
Jan 23 18:13:59 compute-0 sudo[30630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:13:59 compute-0 python3[30632]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:13:59 compute-0 sudo[30630]: pam_unix(sudo:session): session closed for user root
Jan 23 18:14:00 compute-0 sudo[30703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrxqybhdwgbdvwzjhqznblpqqryhdxvm ; /usr/bin/python3'
Jan 23 18:14:00 compute-0 sudo[30703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:14:00 compute-0 python3[30705]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769192036.7153242-33583-126017433162284/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:14:00 compute-0 sudo[30703]: pam_unix(sudo:session): session closed for user root
Jan 23 18:14:00 compute-0 sudo[30729]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohiclhspmrzjfwccntgihrnpbzlibofh ; /usr/bin/python3'
Jan 23 18:14:00 compute-0 sudo[30729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:14:00 compute-0 python3[30731]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:14:00 compute-0 sudo[30729]: pam_unix(sudo:session): session closed for user root
Jan 23 18:14:00 compute-0 sudo[30802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euuvzscnxrgfbcrychdeeujiknzhlhcw ; /usr/bin/python3'
Jan 23 18:14:00 compute-0 sudo[30802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:14:00 compute-0 python3[30804]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769192036.7153242-33583-126017433162284/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:14:00 compute-0 sudo[30802]: pam_unix(sudo:session): session closed for user root
Jan 23 18:14:03 compute-0 sshd-session[30830]: Connection closed by 192.168.122.11 port 39334 [preauth]
Jan 23 18:14:03 compute-0 sshd-session[30829]: Connection closed by 192.168.122.11 port 39330 [preauth]
Jan 23 18:14:03 compute-0 sshd-session[30831]: Unable to negotiate with 192.168.122.11 port 39342: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 23 18:14:03 compute-0 sshd-session[30832]: Unable to negotiate with 192.168.122.11 port 39350: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 23 18:14:03 compute-0 sshd-session[30833]: Unable to negotiate with 192.168.122.11 port 39360: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 23 18:14:13 compute-0 python3[30862]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:19:13 compute-0 sshd-session[29948]: Received disconnect from 38.129.56.53 port 52458:11: disconnected by user
Jan 23 18:19:13 compute-0 sshd-session[29948]: Disconnected from user zuul 38.129.56.53 port 52458
Jan 23 18:19:13 compute-0 sshd-session[29945]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:19:13 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 18:19:13 compute-0 systemd[1]: session-6.scope: Consumed 4.771s CPU time.
Jan 23 18:19:13 compute-0 systemd-logind[792]: Session 6 logged out. Waiting for processes to exit.
Jan 23 18:19:13 compute-0 systemd-logind[792]: Removed session 6.
Jan 23 18:22:46 compute-0 sshd-session[30869]: Connection closed by authenticating user root 221.126.252.82 port 6517 [preauth]
Jan 23 18:25:13 compute-0 sshd-session[30873]: Accepted publickey for zuul from 192.168.122.30 port 57854 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:25:13 compute-0 systemd-logind[792]: New session 7 of user zuul.
Jan 23 18:25:13 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 23 18:25:13 compute-0 sshd-session[30873]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:25:14 compute-0 sshd-session[31027]: error: kex_exchange_identification: read: Connection reset by peer
Jan 23 18:25:14 compute-0 sshd-session[31027]: Connection reset by 38.110.46.241 port 53888
Jan 23 18:25:14 compute-0 python3.9[31026]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:25:15 compute-0 sudo[31206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsakpzcwyaqooaweitmhmrxwlujgdmhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192715.5507147-27-129084447136863/AnsiballZ_command.py'
Jan 23 18:25:15 compute-0 sudo[31206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:16 compute-0 python3.9[31208]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:25:22 compute-0 sshd-session[31232]: Connection closed by authenticating user root 221.126.252.82 port 54438 [preauth]
Jan 23 18:25:27 compute-0 sudo[31206]: pam_unix(sudo:session): session closed for user root
Jan 23 18:25:27 compute-0 sshd-session[30876]: Connection closed by 192.168.122.30 port 57854
Jan 23 18:25:27 compute-0 sshd-session[30873]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:25:27 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 18:25:27 compute-0 systemd[1]: session-7.scope: Consumed 8.049s CPU time.
Jan 23 18:25:27 compute-0 systemd-logind[792]: Session 7 logged out. Waiting for processes to exit.
Jan 23 18:25:27 compute-0 systemd-logind[792]: Removed session 7.
Jan 23 18:25:42 compute-0 sshd-session[31269]: Accepted publickey for zuul from 192.168.122.30 port 43376 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:25:42 compute-0 systemd-logind[792]: New session 8 of user zuul.
Jan 23 18:25:42 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 23 18:25:42 compute-0 sshd-session[31269]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:25:43 compute-0 python3.9[31422]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 23 18:25:44 compute-0 python3.9[31596]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:25:45 compute-0 sudo[31746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkhfktlphbvcbllnrswwirvwhuutioxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192744.8951223-40-137161935729339/AnsiballZ_command.py'
Jan 23 18:25:45 compute-0 sudo[31746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:45 compute-0 python3.9[31748]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:25:45 compute-0 sudo[31746]: pam_unix(sudo:session): session closed for user root
Jan 23 18:25:46 compute-0 sudo[31899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzsvaglarxjvnjabhvtylonswpfzspfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192745.7493253-52-55394690694817/AnsiballZ_stat.py'
Jan 23 18:25:46 compute-0 sudo[31899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:46 compute-0 python3.9[31901]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:25:46 compute-0 sudo[31899]: pam_unix(sudo:session): session closed for user root
Jan 23 18:25:46 compute-0 sudo[32051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpmyocxkxjaxuccmbfmyazoxjwwbnjtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192746.517504-60-266302123062239/AnsiballZ_file.py'
Jan 23 18:25:46 compute-0 sudo[32051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:47 compute-0 python3.9[32053]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:25:47 compute-0 sudo[32051]: pam_unix(sudo:session): session closed for user root
Jan 23 18:25:47 compute-0 sudo[32203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdrcifvhynawqcwfhvgouyyqjwiwgrfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192747.3063622-68-239047044674811/AnsiballZ_stat.py'
Jan 23 18:25:47 compute-0 sudo[32203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:47 compute-0 python3.9[32205]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:25:47 compute-0 sudo[32203]: pam_unix(sudo:session): session closed for user root
Jan 23 18:25:48 compute-0 sudo[32326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikhkifylppuyqeuhtcqzuvcndwmmqhis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192747.3063622-68-239047044674811/AnsiballZ_copy.py'
Jan 23 18:25:48 compute-0 sudo[32326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:48 compute-0 python3.9[32328]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769192747.3063622-68-239047044674811/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:25:48 compute-0 sudo[32326]: pam_unix(sudo:session): session closed for user root
Jan 23 18:25:49 compute-0 sudo[32478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjlnwmfovrplfonvdthxmgmvtnvqvfss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192748.759849-83-23766433535717/AnsiballZ_setup.py'
Jan 23 18:25:49 compute-0 sudo[32478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:49 compute-0 python3.9[32480]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:25:49 compute-0 sudo[32478]: pam_unix(sudo:session): session closed for user root
Jan 23 18:25:49 compute-0 sudo[32634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mymeyiabjhdezzioyvtbjehglzpnmhlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192749.6821778-91-250949291159249/AnsiballZ_file.py'
Jan 23 18:25:49 compute-0 sudo[32634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:50 compute-0 python3.9[32636]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:25:50 compute-0 sudo[32634]: pam_unix(sudo:session): session closed for user root
Jan 23 18:25:50 compute-0 sudo[32786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msjfyeljjqgtvygaljsidpmyeauuwpkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192750.3247662-100-127277392059994/AnsiballZ_file.py'
Jan 23 18:25:50 compute-0 sudo[32786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:50 compute-0 python3.9[32788]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:25:50 compute-0 sudo[32786]: pam_unix(sudo:session): session closed for user root
Jan 23 18:25:51 compute-0 python3.9[32938]: ansible-ansible.builtin.service_facts Invoked
Jan 23 18:25:54 compute-0 python3.9[33191]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:25:55 compute-0 python3.9[33341]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:25:56 compute-0 python3.9[33495]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:25:57 compute-0 sudo[33651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qufkhfyqvumhxifwvwlagqyjvrxgekwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192756.9325461-148-77216732752506/AnsiballZ_setup.py'
Jan 23 18:25:57 compute-0 sudo[33651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:57 compute-0 python3.9[33653]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:25:57 compute-0 sudo[33651]: pam_unix(sudo:session): session closed for user root
Jan 23 18:25:58 compute-0 sudo[33735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsotdjtblrsokyxrzrhuhpoqpevgzbrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192756.9325461-148-77216732752506/AnsiballZ_dnf.py'
Jan 23 18:25:58 compute-0 sudo[33735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:25:58 compute-0 python3.9[33737]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:26:48 compute-0 systemd[1]: Reloading.
Jan 23 18:26:48 compute-0 systemd-rc-local-generator[33938]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:26:49 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 23 18:26:49 compute-0 systemd[1]: Reloading.
Jan 23 18:26:49 compute-0 systemd-rc-local-generator[33978]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:26:49 compute-0 systemd[1]: Starting dnf makecache...
Jan 23 18:26:49 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 23 18:26:49 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 23 18:26:49 compute-0 systemd[1]: Reloading.
Jan 23 18:26:49 compute-0 systemd-rc-local-generator[34020]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:26:49 compute-0 dnf[33988]: Failed determining last makecache time.
Jan 23 18:26:49 compute-0 dnf[33988]: delorean-openstack-barbican-42b4c41831408a8e323 134 kB/s | 3.0 kB     00:00
Jan 23 18:26:49 compute-0 dnf[33988]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 174 kB/s | 3.0 kB     00:00
Jan 23 18:26:49 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 23 18:26:49 compute-0 dnf[33988]: delorean-openstack-cinder-1c00d6490d88e436f26ef 160 kB/s | 3.0 kB     00:00
Jan 23 18:26:49 compute-0 dnf[33988]: delorean-python-stevedore-c4acc5639fd2329372142 169 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-python-cloudkitty-tests-tempest-2c80f8 165 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-os-refresh-config-9bfc52b5049be2d8de61 152 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 180 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-python-designate-tests-tempest-347fdbc 180 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-openstack-glance-1fd12c29b339f30fe823e 159 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 23 18:26:50 compute-0 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 23 18:26:50 compute-0 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 144 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-openstack-manila-3c01b7181572c95dac462 158 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-python-whitebox-neutron-tests-tempest- 142 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-openstack-octavia-ba397f07a7331190208c 143 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-openstack-watcher-c014f81a8647287f6dcc 149 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-ansible-config_template-5ccaa22121a7ff 164 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 155 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-openstack-swift-dc98a8463506ac520c469a 127 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-python-tempestconf-8515371b7cceebd4282 155 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: delorean-openstack-heat-ui-013accbfd179753bc3f0 160 kB/s | 3.0 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: CentOS Stream 9 - BaseOS                         64 kB/s | 6.7 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: CentOS Stream 9 - AppStream                      63 kB/s | 6.8 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: CentOS Stream 9 - CRB                            64 kB/s | 6.6 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: CentOS Stream 9 - Extras packages                71 kB/s | 7.3 kB     00:00
Jan 23 18:26:50 compute-0 dnf[33988]: dlrn-antelope-testing                           136 kB/s | 3.0 kB     00:00
Jan 23 18:26:51 compute-0 dnf[33988]: dlrn-antelope-build-deps                        157 kB/s | 3.0 kB     00:00
Jan 23 18:26:51 compute-0 dnf[33988]: centos9-rabbitmq                                108 kB/s | 3.0 kB     00:00
Jan 23 18:26:51 compute-0 dnf[33988]: centos9-storage                                 136 kB/s | 3.0 kB     00:00
Jan 23 18:26:51 compute-0 dnf[33988]: centos9-opstools                                123 kB/s | 3.0 kB     00:00
Jan 23 18:26:51 compute-0 dnf[33988]: NFV SIG OpenvSwitch                              17 kB/s | 3.0 kB     00:00
Jan 23 18:26:51 compute-0 dnf[33988]: repo-setup-centos-appstream                     166 kB/s | 4.4 kB     00:00
Jan 23 18:26:51 compute-0 dnf[33988]: repo-setup-centos-baseos                        146 kB/s | 3.9 kB     00:00
Jan 23 18:26:51 compute-0 dnf[33988]: repo-setup-centos-highavailability              138 kB/s | 3.9 kB     00:00
Jan 23 18:26:51 compute-0 dnf[33988]: repo-setup-centos-powertools                    172 kB/s | 4.3 kB     00:00
Jan 23 18:26:51 compute-0 dnf[33988]: Extra Packages for Enterprise Linux 9 - x86_64  183 kB/s |  30 kB     00:00
Jan 23 18:26:52 compute-0 dnf[33988]: Metadata cache created.
Jan 23 18:26:52 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 23 18:26:52 compute-0 systemd[1]: Finished dnf makecache.
Jan 23 18:26:52 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.881s CPU time.
Jan 23 18:28:01 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Jan 23 18:28:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 18:28:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 18:28:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 18:28:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 18:28:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 18:28:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 18:28:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 18:28:02 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 23 18:28:02 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 18:28:02 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 18:28:02 compute-0 systemd[1]: Reloading.
Jan 23 18:28:02 compute-0 systemd-rc-local-generator[34393]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:28:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 18:28:03 compute-0 sudo[33735]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:03 compute-0 sudo[35304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmmfxbwxdcuuqcwfvwzacltyccfdmcbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192883.280287-160-221840690391381/AnsiballZ_command.py'
Jan 23 18:28:03 compute-0 sudo[35304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 18:28:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 18:28:03 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.307s CPU time.
Jan 23 18:28:03 compute-0 systemd[1]: run-rf68dd95374c9475eb4624072be321a14.service: Deactivated successfully.
Jan 23 18:28:03 compute-0 python3.9[35306]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:28:04 compute-0 sudo[35304]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:05 compute-0 sudo[35586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnimosfqzdsrxbmvtcaotctnwewzrmyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192884.6921642-168-174689743333768/AnsiballZ_selinux.py'
Jan 23 18:28:05 compute-0 sudo[35586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:05 compute-0 python3.9[35588]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 23 18:28:05 compute-0 sudo[35586]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:06 compute-0 sudo[35738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lybpfpwvpcxogjvzbdwisvyyfbuddmmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192885.8793402-179-22924650732480/AnsiballZ_command.py'
Jan 23 18:28:06 compute-0 sudo[35738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:06 compute-0 python3.9[35740]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 23 18:28:07 compute-0 sudo[35738]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:07 compute-0 sudo[35891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkwfosqirwioxhteutnokotjuxsmepwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192887.465117-187-123293244209885/AnsiballZ_file.py'
Jan 23 18:28:07 compute-0 sudo[35891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:11 compute-0 python3.9[35893]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:28:11 compute-0 sudo[35891]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:11 compute-0 sudo[36045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubbwfrtzsppidlcidybpbdjzpbwwles ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192891.409966-195-167395397622465/AnsiballZ_mount.py'
Jan 23 18:28:11 compute-0 sudo[36045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:12 compute-0 python3.9[36047]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 23 18:28:12 compute-0 sudo[36045]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:13 compute-0 sudo[36197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpxrjktfkfdhctyhmlwlubkaowbaoaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192892.7954433-223-60728904083288/AnsiballZ_file.py'
Jan 23 18:28:13 compute-0 sudo[36197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:15 compute-0 python3.9[36199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:28:15 compute-0 sudo[36197]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:16 compute-0 sudo[36349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uskjfziixcoeiujfnjtfoephughfhfke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192895.8615363-231-266956258788116/AnsiballZ_stat.py'
Jan 23 18:28:16 compute-0 sudo[36349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:16 compute-0 python3.9[36351]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:28:16 compute-0 sudo[36349]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:16 compute-0 sudo[36472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnrgwrhcxyfpgcrtdxjicyowwlsaoxpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192895.8615363-231-266956258788116/AnsiballZ_copy.py'
Jan 23 18:28:16 compute-0 sudo[36472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:20 compute-0 python3.9[36474]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769192895.8615363-231-266956258788116/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad138ff1d14b381863f6932c876cfdf53601960 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:28:20 compute-0 sudo[36472]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:20 compute-0 sudo[36624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikfmlmwrooaklijtkhqhpznamxhchdqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192900.7151046-255-121730389516991/AnsiballZ_stat.py'
Jan 23 18:28:20 compute-0 sudo[36624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:21 compute-0 python3.9[36626]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:28:21 compute-0 sudo[36624]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:21 compute-0 sudo[36776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dytajikzpusyoijlmqhntbnjgaxuxnkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192901.3661966-263-30522345268667/AnsiballZ_command.py'
Jan 23 18:28:21 compute-0 sudo[36776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:21 compute-0 python3.9[36778]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:28:21 compute-0 sudo[36776]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:22 compute-0 sudo[36929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryiueclhhwlhlbzxqebjekjmqakjaadv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192902.0818539-271-200711363898422/AnsiballZ_file.py'
Jan 23 18:28:22 compute-0 sudo[36929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:23 compute-0 python3.9[36931]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:28:23 compute-0 sudo[36929]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:24 compute-0 sudo[37081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiezfpoxulmehfoprmqbajtkbgxtnhqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192904.1998017-282-132057166110253/AnsiballZ_getent.py'
Jan 23 18:28:24 compute-0 sudo[37081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:24 compute-0 python3.9[37083]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 23 18:28:24 compute-0 sudo[37081]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:24 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 18:28:25 compute-0 sudo[37235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oufisokgtwllmhndhikcmxfspjizluca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192905.035358-290-132069916454995/AnsiballZ_group.py'
Jan 23 18:28:25 compute-0 sudo[37235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:25 compute-0 python3.9[37237]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 18:28:25 compute-0 groupadd[37238]: group added to /etc/group: name=qemu, GID=107
Jan 23 18:28:25 compute-0 groupadd[37238]: group added to /etc/gshadow: name=qemu
Jan 23 18:28:25 compute-0 groupadd[37238]: new group: name=qemu, GID=107
Jan 23 18:28:25 compute-0 sudo[37235]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:26 compute-0 sudo[37393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pigbrmwmrzancxvxmebwdvzfguxnwter ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192905.8660307-298-70293531158410/AnsiballZ_user.py'
Jan 23 18:28:26 compute-0 sudo[37393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:26 compute-0 python3.9[37395]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 18:28:26 compute-0 useradd[37397]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 23 18:28:26 compute-0 sudo[37393]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:27 compute-0 sudo[37553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnzrnylnfrjnnkxmgysbkrpqhjtpcdjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192906.7720969-306-151127369808127/AnsiballZ_getent.py'
Jan 23 18:28:27 compute-0 sudo[37553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:27 compute-0 python3.9[37555]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 23 18:28:27 compute-0 sudo[37553]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:27 compute-0 sudo[37706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvvbuowmecbaibbovlgzhxpidsnqdxng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192907.4482107-314-235538877498434/AnsiballZ_group.py'
Jan 23 18:28:27 compute-0 sudo[37706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:27 compute-0 python3.9[37708]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 18:28:27 compute-0 groupadd[37709]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 23 18:28:27 compute-0 groupadd[37709]: group added to /etc/gshadow: name=hugetlbfs
Jan 23 18:28:27 compute-0 groupadd[37709]: new group: name=hugetlbfs, GID=42477
Jan 23 18:28:28 compute-0 sudo[37706]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:28 compute-0 sudo[37864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdscdxjsibxilwmqaxmhwfdrcwnqszru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192908.2250278-323-21522774228775/AnsiballZ_file.py'
Jan 23 18:28:28 compute-0 sudo[37864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:28 compute-0 python3.9[37866]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 23 18:28:28 compute-0 sudo[37864]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:29 compute-0 sudo[38016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akyuwhidmwyswuuivbylefmvflrnfsyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192909.0092928-334-161580812684098/AnsiballZ_dnf.py'
Jan 23 18:28:29 compute-0 sudo[38016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:29 compute-0 python3.9[38018]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:28:31 compute-0 sudo[38016]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:31 compute-0 sudo[38170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pquzwrautrrnktlldeuancmkishqkeby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192911.4023676-342-110147207429805/AnsiballZ_file.py'
Jan 23 18:28:31 compute-0 sudo[38170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:31 compute-0 python3.9[38172]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:28:31 compute-0 sudo[38170]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:32 compute-0 sudo[38322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tobnwjxqvihiuratjbumebxtqamsitcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192912.0463367-350-170786830660410/AnsiballZ_stat.py'
Jan 23 18:28:32 compute-0 sudo[38322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:32 compute-0 python3.9[38324]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:28:32 compute-0 sudo[38322]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:32 compute-0 sudo[38445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpydowrhxbrvbqbvwechqrkaugtpdndl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192912.0463367-350-170786830660410/AnsiballZ_copy.py'
Jan 23 18:28:32 compute-0 sudo[38445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:33 compute-0 python3.9[38447]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769192912.0463367-350-170786830660410/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:28:33 compute-0 sudo[38445]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:33 compute-0 sudo[38597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjregqhackvyscvvyeojvxvauevgmwvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192913.2267575-365-89756576018767/AnsiballZ_systemd.py'
Jan 23 18:28:33 compute-0 sudo[38597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:34 compute-0 python3.9[38599]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:28:34 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 23 18:28:34 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 18:28:34 compute-0 kernel: Bridge firewalling registered
Jan 23 18:28:34 compute-0 systemd-modules-load[38603]: Inserted module 'br_netfilter'
Jan 23 18:28:34 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 23 18:28:34 compute-0 sudo[38597]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:34 compute-0 sudo[38756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egzbtwkujtkrusrcobcbjgwyvxcyqaem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192914.3678906-373-110745738813777/AnsiballZ_stat.py'
Jan 23 18:28:34 compute-0 sudo[38756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:34 compute-0 python3.9[38758]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:28:34 compute-0 sudo[38756]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:35 compute-0 sudo[38879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzrwifwmvlcgowyedzphnkmitmctklgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192914.3678906-373-110745738813777/AnsiballZ_copy.py'
Jan 23 18:28:35 compute-0 sudo[38879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:35 compute-0 python3.9[38881]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769192914.3678906-373-110745738813777/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:28:35 compute-0 sudo[38879]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:36 compute-0 sudo[39031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhyilqbhzmbtkunwegejhgfnzuhggiwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192915.7898564-391-273020732367979/AnsiballZ_dnf.py'
Jan 23 18:28:36 compute-0 sudo[39031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:36 compute-0 python3.9[39033]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:28:39 compute-0 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 23 18:28:39 compute-0 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 23 18:28:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 18:28:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 18:28:39 compute-0 systemd[1]: Reloading.
Jan 23 18:28:40 compute-0 systemd-rc-local-generator[39096]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:28:40 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 18:28:40 compute-0 sudo[39031]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:41 compute-0 python3.9[40388]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:28:42 compute-0 python3.9[41427]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 23 18:28:42 compute-0 python3.9[42238]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:28:43 compute-0 sudo[43105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acoykqhaeaawbzjnffooaxeppjchckgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192923.1984477-430-146525705486338/AnsiballZ_command.py'
Jan 23 18:28:43 compute-0 sudo[43105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:43 compute-0 python3.9[43129]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:28:43 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 18:28:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 18:28:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 18:28:43 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.648s CPU time.
Jan 23 18:28:43 compute-0 systemd[1]: run-r02c386feba57453986456cf78fd5e375.service: Deactivated successfully.
Jan 23 18:28:44 compute-0 systemd[1]: Starting Authorization Manager...
Jan 23 18:28:44 compute-0 polkitd[43413]: Started polkitd version 0.117
Jan 23 18:28:44 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 18:28:44 compute-0 polkitd[43413]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 18:28:44 compute-0 polkitd[43413]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 18:28:44 compute-0 polkitd[43413]: Finished loading, compiling and executing 2 rules
Jan 23 18:28:44 compute-0 systemd[1]: Started Authorization Manager.
Jan 23 18:28:44 compute-0 polkitd[43413]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 23 18:28:44 compute-0 sudo[43105]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:44 compute-0 sudo[43581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcuiqizcvzdpdvnpfplaiwfiwpdgnimf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192924.4609392-439-179510555502186/AnsiballZ_systemd.py'
Jan 23 18:28:44 compute-0 sudo[43581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:45 compute-0 python3.9[43583]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:28:45 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 23 18:28:45 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 23 18:28:45 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 23 18:28:45 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 18:28:45 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 18:28:45 compute-0 sudo[43581]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:46 compute-0 python3.9[43744]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 23 18:28:48 compute-0 sudo[43894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enfleojavwbidijqihmfkyizcpkuhirk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192927.9962854-496-96442073233009/AnsiballZ_systemd.py'
Jan 23 18:28:48 compute-0 sudo[43894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:48 compute-0 python3.9[43896]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:28:48 compute-0 systemd[1]: Reloading.
Jan 23 18:28:48 compute-0 systemd-rc-local-generator[43926]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:28:48 compute-0 sudo[43894]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:49 compute-0 sudo[44083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgfdkhgxkyzddbeqfckefefthogszvib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192929.0172093-496-260958388942151/AnsiballZ_systemd.py'
Jan 23 18:28:49 compute-0 sudo[44083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:49 compute-0 python3.9[44085]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:28:49 compute-0 systemd[1]: Reloading.
Jan 23 18:28:49 compute-0 systemd-rc-local-generator[44114]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:28:49 compute-0 sudo[44083]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:50 compute-0 sudo[44273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txklcnxydsewhcejmdsqihmanqsxddcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192930.094729-512-25868856399375/AnsiballZ_command.py'
Jan 23 18:28:50 compute-0 sudo[44273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:51 compute-0 python3.9[44275]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:28:51 compute-0 sudo[44273]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:51 compute-0 sudo[44426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rccpdnqkwzrltvikqnemjezuwudrutyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192931.6555882-520-134527036583211/AnsiballZ_command.py'
Jan 23 18:28:51 compute-0 sudo[44426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:52 compute-0 python3.9[44428]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:28:52 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 23 18:28:52 compute-0 sudo[44426]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:52 compute-0 sudo[44579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzhhblbxgoxegvumaaygdjhematibcjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192932.3491373-528-18526164413237/AnsiballZ_command.py'
Jan 23 18:28:52 compute-0 sudo[44579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:52 compute-0 python3.9[44581]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:28:54 compute-0 sudo[44579]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:54 compute-0 sudo[44741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhaqbmwvygtlxeoourrkosgxfizbrlvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192934.4717164-536-271345080074532/AnsiballZ_command.py'
Jan 23 18:28:54 compute-0 sudo[44741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:54 compute-0 python3.9[44743]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:28:54 compute-0 sudo[44741]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:55 compute-0 sudo[44894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knttrxzydtwzxqsyniefugereuifurme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192935.119399-544-218257421904098/AnsiballZ_systemd.py'
Jan 23 18:28:55 compute-0 sudo[44894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:28:55 compute-0 python3.9[44896]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:28:55 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 18:28:55 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 23 18:28:55 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 23 18:28:55 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 23 18:28:55 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 18:28:55 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 23 18:28:55 compute-0 sudo[44894]: pam_unix(sudo:session): session closed for user root
Jan 23 18:28:56 compute-0 sshd-session[31272]: Connection closed by 192.168.122.30 port 43376
Jan 23 18:28:56 compute-0 sshd-session[31269]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:28:56 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 18:28:56 compute-0 systemd[1]: session-8.scope: Consumed 2min 20.186s CPU time.
Jan 23 18:28:56 compute-0 systemd-logind[792]: Session 8 logged out. Waiting for processes to exit.
Jan 23 18:28:56 compute-0 systemd-logind[792]: Removed session 8.
Jan 23 18:29:01 compute-0 sshd-session[44927]: Accepted publickey for zuul from 192.168.122.30 port 59424 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:29:01 compute-0 systemd-logind[792]: New session 9 of user zuul.
Jan 23 18:29:01 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 23 18:29:01 compute-0 sshd-session[44927]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:29:02 compute-0 python3.9[45080]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:29:03 compute-0 sudo[45234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fomxhpjvfngjdeohkbdbenozinstofnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192943.2977006-31-120724752694010/AnsiballZ_getent.py'
Jan 23 18:29:03 compute-0 sudo[45234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:03 compute-0 python3.9[45236]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 23 18:29:03 compute-0 sudo[45234]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:04 compute-0 sudo[45387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqqugmayqrhuuqhvizecrnayrpwfagl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192944.0802307-39-208028014630118/AnsiballZ_group.py'
Jan 23 18:29:04 compute-0 sudo[45387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:04 compute-0 python3.9[45389]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 18:29:04 compute-0 groupadd[45390]: group added to /etc/group: name=openvswitch, GID=42476
Jan 23 18:29:04 compute-0 groupadd[45390]: group added to /etc/gshadow: name=openvswitch
Jan 23 18:29:04 compute-0 groupadd[45390]: new group: name=openvswitch, GID=42476
Jan 23 18:29:04 compute-0 sudo[45387]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:05 compute-0 sudo[45545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkaqcdbqxkvheubutejzxsdacucyyfii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192944.981096-47-114753034111567/AnsiballZ_user.py'
Jan 23 18:29:05 compute-0 sudo[45545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:05 compute-0 python3.9[45547]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 18:29:05 compute-0 useradd[45549]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 23 18:29:05 compute-0 useradd[45549]: add 'openvswitch' to group 'hugetlbfs'
Jan 23 18:29:05 compute-0 useradd[45549]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 23 18:29:05 compute-0 sudo[45545]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:06 compute-0 sudo[45705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqsutsnmznfefnutthqwspkkdugffctc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192946.0386076-57-209181810124361/AnsiballZ_setup.py'
Jan 23 18:29:06 compute-0 sudo[45705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:06 compute-0 python3.9[45707]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:29:06 compute-0 sudo[45705]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:07 compute-0 sudo[45789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmsvinfhgzqbpgjfpqgurdpdxudoikjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192946.0386076-57-209181810124361/AnsiballZ_dnf.py'
Jan 23 18:29:07 compute-0 sudo[45789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:07 compute-0 python3.9[45791]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 18:29:09 compute-0 sudo[45789]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:10 compute-0 sudo[45953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mozsmrfqgolrmgjmxycckurhoaiirkrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192950.0553448-71-152243048354348/AnsiballZ_dnf.py'
Jan 23 18:29:10 compute-0 sudo[45953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:10 compute-0 python3.9[45955]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:29:22 compute-0 kernel: SELinux:  Converting 2736 SID table entries...
Jan 23 18:29:22 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 18:29:22 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 18:29:22 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 18:29:22 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 18:29:22 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 18:29:22 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 18:29:22 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 18:29:23 compute-0 groupadd[45978]: group added to /etc/group: name=unbound, GID=994
Jan 23 18:29:23 compute-0 groupadd[45978]: group added to /etc/gshadow: name=unbound
Jan 23 18:29:23 compute-0 groupadd[45978]: new group: name=unbound, GID=994
Jan 23 18:29:23 compute-0 useradd[45985]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 23 18:29:23 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 23 18:29:23 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 23 18:29:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 18:29:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 18:29:24 compute-0 systemd[1]: Reloading.
Jan 23 18:29:24 compute-0 systemd-rc-local-generator[46481]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:29:24 compute-0 systemd-sysv-generator[46487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:29:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 18:29:25 compute-0 sudo[45953]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 18:29:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 18:29:25 compute-0 systemd[1]: run-r272fc544012c41439fad47607e7f75f9.service: Deactivated successfully.
Jan 23 18:29:25 compute-0 sudo[47051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikvsoftknxnzxgnflgbfhabvgifnuxvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192965.4159935-79-121065216718799/AnsiballZ_systemd.py'
Jan 23 18:29:25 compute-0 sudo[47051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:26 compute-0 python3.9[47053]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 18:29:26 compute-0 systemd[1]: Reloading.
Jan 23 18:29:26 compute-0 systemd-sysv-generator[47086]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:29:26 compute-0 systemd-rc-local-generator[47083]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:29:26 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 23 18:29:26 compute-0 chown[47094]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 23 18:29:26 compute-0 ovs-ctl[47099]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 23 18:29:26 compute-0 ovs-ctl[47099]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 23 18:29:26 compute-0 ovs-ctl[47099]: Starting ovsdb-server [  OK  ]
Jan 23 18:29:26 compute-0 ovs-vsctl[47148]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 23 18:29:26 compute-0 ovs-vsctl[47168]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d273cd83-f2f8-4b91-b18f-957051ff2a6c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 23 18:29:26 compute-0 ovs-ctl[47099]: Configuring Open vSwitch system IDs [  OK  ]
Jan 23 18:29:26 compute-0 ovs-vsctl[47174]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 23 18:29:26 compute-0 ovs-ctl[47099]: Enabling remote OVSDB managers [  OK  ]
Jan 23 18:29:26 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 23 18:29:26 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 23 18:29:26 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 23 18:29:27 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 23 18:29:27 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 23 18:29:27 compute-0 ovs-ctl[47219]: Inserting openvswitch module [  OK  ]
Jan 23 18:29:27 compute-0 ovs-ctl[47188]: Starting ovs-vswitchd [  OK  ]
Jan 23 18:29:27 compute-0 ovs-vsctl[47236]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 23 18:29:27 compute-0 ovs-ctl[47188]: Enabling remote OVSDB managers [  OK  ]
Jan 23 18:29:27 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 23 18:29:27 compute-0 systemd[1]: Starting Open vSwitch...
Jan 23 18:29:27 compute-0 systemd[1]: Finished Open vSwitch.
Jan 23 18:29:27 compute-0 sudo[47051]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:28 compute-0 python3.9[47388]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:29:28 compute-0 sudo[47538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhodccihydwriarqnyjtixdlkbbwgvqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192968.2220151-97-18218508918/AnsiballZ_sefcontext.py'
Jan 23 18:29:28 compute-0 sudo[47538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:28 compute-0 python3.9[47540]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 23 18:29:30 compute-0 kernel: SELinux:  Converting 2750 SID table entries...
Jan 23 18:29:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 18:29:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 18:29:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 18:29:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 18:29:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 18:29:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 18:29:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 18:29:30 compute-0 sudo[47538]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:31 compute-0 python3.9[47695]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:29:32 compute-0 sudo[47851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzxkcmsnpncbnaslkesaglozhtalpisx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192971.7761443-115-91153080641989/AnsiballZ_dnf.py'
Jan 23 18:29:32 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 23 18:29:32 compute-0 sudo[47851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:32 compute-0 python3.9[47853]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:29:33 compute-0 sudo[47851]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:34 compute-0 sudo[48004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnfdjklveecpafketbypouodtmgqphis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192973.8098621-123-87831544149445/AnsiballZ_command.py'
Jan 23 18:29:34 compute-0 sudo[48004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:34 compute-0 python3.9[48006]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:29:35 compute-0 sudo[48004]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:35 compute-0 sudo[48291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcnjbxmiddjezrrxfyffnxkktmteenap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192975.348681-131-163171621200941/AnsiballZ_file.py'
Jan 23 18:29:35 compute-0 sudo[48291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:35 compute-0 python3.9[48293]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 23 18:29:35 compute-0 sudo[48291]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:36 compute-0 python3.9[48443]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:29:37 compute-0 sudo[48595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxglvlkleyzzerbnllqdyaifppxdjxao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192976.943057-147-83537825550488/AnsiballZ_dnf.py'
Jan 23 18:29:37 compute-0 sudo[48595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:37 compute-0 python3.9[48597]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:29:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 18:29:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 18:29:39 compute-0 systemd[1]: Reloading.
Jan 23 18:29:39 compute-0 systemd-rc-local-generator[48636]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:29:39 compute-0 systemd-sysv-generator[48639]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:29:40 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 18:29:40 compute-0 sudo[48595]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 18:29:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 18:29:40 compute-0 systemd[1]: run-rfae1bd4310a24a3b83e52978f9caefec.service: Deactivated successfully.
Jan 23 18:29:41 compute-0 sudo[48911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgflwphdzkdateakxsbwfgukzlbsjdwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192980.7183557-155-114436710034703/AnsiballZ_systemd.py'
Jan 23 18:29:41 compute-0 sudo[48911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:41 compute-0 python3.9[48913]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:29:41 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 23 18:29:41 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 23 18:29:41 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 23 18:29:41 compute-0 NetworkManager[7193]: <info>  [1769192981.3449] caught SIGTERM, shutting down normally.
Jan 23 18:29:41 compute-0 systemd[1]: Stopping Network Manager...
Jan 23 18:29:41 compute-0 NetworkManager[7193]: <info>  [1769192981.3462] dhcp4 (eth0): canceled DHCP transaction
Jan 23 18:29:41 compute-0 NetworkManager[7193]: <info>  [1769192981.3462] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 18:29:41 compute-0 NetworkManager[7193]: <info>  [1769192981.3462] dhcp4 (eth0): state changed no lease
Jan 23 18:29:41 compute-0 NetworkManager[7193]: <info>  [1769192981.3464] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 18:29:41 compute-0 NetworkManager[7193]: <info>  [1769192981.3531] exiting (success)
Jan 23 18:29:41 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 18:29:41 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 18:29:41 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 23 18:29:41 compute-0 systemd[1]: Stopped Network Manager.
Jan 23 18:29:41 compute-0 systemd[1]: NetworkManager.service: Consumed 11.234s CPU time, 4.1M memory peak, read 0B from disk, written 20.0K to disk.
Jan 23 18:29:41 compute-0 systemd[1]: Starting Network Manager...
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.4377] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:6f3b892c-d29e-4c61-8f43-845fc2f2e50a)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.4378] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.4434] manager[0x55d66150c000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 18:29:41 compute-0 systemd[1]: Starting Hostname Service...
Jan 23 18:29:41 compute-0 systemd[1]: Started Hostname Service.
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5314] hostname: hostname: using hostnamed
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5314] hostname: static hostname changed from (none) to "compute-0"
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5319] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5324] manager[0x55d66150c000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5324] manager[0x55d66150c000]: rfkill: WWAN hardware radio set enabled
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5342] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5350] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5350] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5351] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5351] manager: Networking is enabled by state file
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5354] settings: Loaded settings plugin: keyfile (internal)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5357] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5388] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5397] dhcp: init: Using DHCP client 'internal'
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5400] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5404] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5408] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5414] device (lo): Activation: starting connection 'lo' (8f9a105d-dccb-4e29-a947-e22575a1eaec)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5419] device (eth0): carrier: link connected
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5423] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5428] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5429] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5435] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5441] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5446] device (eth1): carrier: link connected
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5449] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5453] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (dbab0fcc-2806-5bf7-af25-c9e9c0b22e49) (indicated)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5453] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5458] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5464] device (eth1): Activation: starting connection 'ci-private-network' (dbab0fcc-2806-5bf7-af25-c9e9c0b22e49)
Jan 23 18:29:41 compute-0 systemd[1]: Started Network Manager.
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5471] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5478] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5482] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5487] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5490] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5493] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5496] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5499] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5503] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5509] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5513] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5522] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5535] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5542] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5545] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5551] device (lo): Activation: successful, device activated.
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5558] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5561] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5564] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5567] device (eth1): Activation: successful, device activated.
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5574] dhcp4 (eth0): state changed new lease, address=38.129.56.100
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5582] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 18:29:41 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5659] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5676] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5678] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5683] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5686] device (eth0): Activation: successful, device activated.
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5691] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 18:29:41 compute-0 NetworkManager[48925]: <info>  [1769192981.5694] manager: startup complete
Jan 23 18:29:41 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 23 18:29:41 compute-0 sudo[48911]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:42 compute-0 sudo[49137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smwyxltxtqiabrvnlmulogwhnbvgoeus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192981.7735374-163-53274638360608/AnsiballZ_dnf.py'
Jan 23 18:29:42 compute-0 sudo[49137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:42 compute-0 python3.9[49139]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:29:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 18:29:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 18:29:47 compute-0 systemd[1]: Reloading.
Jan 23 18:29:47 compute-0 systemd-rc-local-generator[49192]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:29:47 compute-0 systemd-sysv-generator[49195]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:29:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 18:29:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 18:29:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 18:29:48 compute-0 systemd[1]: run-r1444da106c3b48ca9c5f55ac158ba3a8.service: Deactivated successfully.
Jan 23 18:29:48 compute-0 sudo[49137]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:49 compute-0 sudo[49596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqyhubbzbvdbrhateadmjsnqqesgbkyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192989.0504277-175-244693803063866/AnsiballZ_stat.py'
Jan 23 18:29:49 compute-0 sudo[49596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:49 compute-0 python3.9[49598]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:29:49 compute-0 sudo[49596]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:50 compute-0 sudo[49748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfvksvgeszvkxwghwqpmrqsfmotmbkwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192989.803956-184-161339385830369/AnsiballZ_ini_file.py'
Jan 23 18:29:50 compute-0 sudo[49748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:50 compute-0 python3.9[49750]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:29:50 compute-0 sudo[49748]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:51 compute-0 sudo[49902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuftndobtlnzrhiuzamsybkkyqbefyoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192990.757118-194-6043522456128/AnsiballZ_ini_file.py'
Jan 23 18:29:51 compute-0 sudo[49902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:51 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 18:29:51 compute-0 python3.9[49904]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:29:51 compute-0 sudo[49902]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:52 compute-0 sudo[50054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyzwetzgqnbhowmgxvzcgdmczwonyjtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192991.889701-194-135653501980320/AnsiballZ_ini_file.py'
Jan 23 18:29:52 compute-0 sudo[50054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:52 compute-0 python3.9[50056]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:29:52 compute-0 sudo[50054]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:52 compute-0 sudo[50206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sonrvmuzajyfaqqtmqjoombxjpjjsxru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192992.492624-209-107273552272283/AnsiballZ_ini_file.py'
Jan 23 18:29:52 compute-0 sudo[50206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:52 compute-0 python3.9[50208]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:29:52 compute-0 sudo[50206]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:53 compute-0 sudo[50358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geipmuvthazmpyzrmrcctjhrhbrehqhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192993.1067727-209-111959538210436/AnsiballZ_ini_file.py'
Jan 23 18:29:53 compute-0 sudo[50358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:53 compute-0 python3.9[50360]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:29:53 compute-0 sudo[50358]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:54 compute-0 sudo[50510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjawgwggkbbvwckigpmgkrtttrcdcbsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192993.7847786-224-23909478905342/AnsiballZ_stat.py'
Jan 23 18:29:54 compute-0 sudo[50510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:54 compute-0 python3.9[50512]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:29:54 compute-0 sudo[50510]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:54 compute-0 sudo[50633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbdzipnnyjqxiryoqpybbuqwuxnpgkkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192993.7847786-224-23909478905342/AnsiballZ_copy.py'
Jan 23 18:29:54 compute-0 sudo[50633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:54 compute-0 python3.9[50635]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769192993.7847786-224-23909478905342/.source _original_basename=.kil2sj1f follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:29:54 compute-0 sudo[50633]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:55 compute-0 sudo[50785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rywqtyscnxiurkexuvtvdqrauqlxsgyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192995.0873816-239-146284305215676/AnsiballZ_file.py'
Jan 23 18:29:55 compute-0 sudo[50785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:55 compute-0 python3.9[50787]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:29:55 compute-0 sudo[50785]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:56 compute-0 sudo[50937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hecnanyjrzhmlgrkznhugctjzivuware ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192995.722343-247-240740139420115/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 23 18:29:56 compute-0 sudo[50937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:56 compute-0 python3.9[50939]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 23 18:29:56 compute-0 sudo[50937]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:56 compute-0 sudo[51089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndashytmuidyqsizyqhlernnpuusbyzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192996.5732043-256-42903776215200/AnsiballZ_file.py'
Jan 23 18:29:56 compute-0 sudo[51089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:57 compute-0 python3.9[51091]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:29:57 compute-0 sudo[51089]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:57 compute-0 sudo[51241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbzqdorhcplmntabwhrgzofvviyfuxce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192997.4106264-266-30617920644069/AnsiballZ_stat.py'
Jan 23 18:29:57 compute-0 sudo[51241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:57 compute-0 sudo[51241]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:58 compute-0 sudo[51364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugvfpkqmbcwdghnxqrgtnbsygntxmpqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192997.4106264-266-30617920644069/AnsiballZ_copy.py'
Jan 23 18:29:58 compute-0 sudo[51364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:58 compute-0 sudo[51364]: pam_unix(sudo:session): session closed for user root
Jan 23 18:29:59 compute-0 sudo[51516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qycpufaqzppptnrwhhhhqaynhxiiials ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192998.656047-281-66668345782132/AnsiballZ_slurp.py'
Jan 23 18:29:59 compute-0 sudo[51516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:29:59 compute-0 python3.9[51518]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 23 18:29:59 compute-0 sudo[51516]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:00 compute-0 sudo[51691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfgsygpfdzwhcruopjptpcbaizqnzzer ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769192999.5065045-290-20819233833393/async_wrapper.py j832166267849 300 /home/zuul/.ansible/tmp/ansible-tmp-1769192999.5065045-290-20819233833393/AnsiballZ_edpm_os_net_config.py _'
Jan 23 18:30:00 compute-0 sudo[51691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:00 compute-0 ansible-async_wrapper.py[51693]: Invoked with j832166267849 300 /home/zuul/.ansible/tmp/ansible-tmp-1769192999.5065045-290-20819233833393/AnsiballZ_edpm_os_net_config.py _
Jan 23 18:30:00 compute-0 ansible-async_wrapper.py[51696]: Starting module and watcher
Jan 23 18:30:00 compute-0 ansible-async_wrapper.py[51696]: Start watching 51697 (300)
Jan 23 18:30:00 compute-0 ansible-async_wrapper.py[51697]: Start module (51697)
Jan 23 18:30:00 compute-0 ansible-async_wrapper.py[51693]: Return async_wrapper task started.
Jan 23 18:30:00 compute-0 sudo[51691]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:00 compute-0 python3.9[51698]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 23 18:30:01 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 23 18:30:01 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 23 18:30:01 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 23 18:30:01 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 23 18:30:01 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.5598] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.5650] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6410] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6411] audit: op="connection-add" uuid="05d7ed96-9a15-46ad-87f1-9f8dda6bcd30" name="br-ex-br" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6432] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6433] audit: op="connection-add" uuid="0d97adef-0173-4d04-bb1c-72e5657824e4" name="br-ex-port" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6445] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6446] audit: op="connection-add" uuid="a7176fda-aaea-4e62-8422-49e6c8ed2ee6" name="eth1-port" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6458] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6459] audit: op="connection-add" uuid="6d29af67-ad5d-4b6f-affa-e6f3462fa8ce" name="vlan20-port" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6472] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6473] audit: op="connection-add" uuid="50381039-a748-417e-93ff-01574a02adda" name="vlan21-port" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6486] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6487] audit: op="connection-add" uuid="790e8318-cd2e-49fc-8000-dbe0e4952a56" name="vlan22-port" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6497] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6499] audit: op="connection-add" uuid="48d3197e-6971-4871-8938-640fcbe67694" name="vlan23-port" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6520] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6536] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6538] audit: op="connection-add" uuid="8ab505ed-4249-4e8d-91fb-87a12c79d64d" name="br-ex-if" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6621] audit: op="connection-update" uuid="dbab0fcc-2806-5bf7-af25-c9e9c0b22e49" name="ci-private-network" args="ovs-external-ids.data,ovs-interface.type,ipv6.dns,ipv6.routes,ipv6.addr-gen-mode,ipv6.method,ipv6.addresses,ipv6.routing-rules,connection.slave-type,connection.port-type,connection.master,connection.timestamp,connection.controller,ipv4.dns,ipv4.routes,ipv4.method,ipv4.addresses,ipv4.never-default,ipv4.routing-rules" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6640] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6642] audit: op="connection-add" uuid="10acd242-b075-4553-bc77-fcc0f83f9355" name="vlan20-if" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6670] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6672] audit: op="connection-add" uuid="c2670a92-1190-4dc4-bf3b-9f4093abeab4" name="vlan21-if" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6689] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6690] audit: op="connection-add" uuid="83671303-ecd9-4201-8379-2404e9296af6" name="vlan22-if" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6706] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6707] audit: op="connection-add" uuid="1a11d950-fce8-4919-b33a-bd91ba57226c" name="vlan23-if" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6721] audit: op="connection-delete" uuid="bc036e08-846c-34c0-9a7c-cba0a7407da5" name="Wired connection 1" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6735] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6737] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6742] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6745] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (05d7ed96-9a15-46ad-87f1-9f8dda6bcd30)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6746] audit: op="connection-activate" uuid="05d7ed96-9a15-46ad-87f1-9f8dda6bcd30" name="br-ex-br" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6747] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6748] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6752] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6755] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (0d97adef-0173-4d04-bb1c-72e5657824e4)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6757] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6758] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6762] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6766] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a7176fda-aaea-4e62-8422-49e6c8ed2ee6)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6768] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6769] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6774] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6778] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (6d29af67-ad5d-4b6f-affa-e6f3462fa8ce)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6780] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6781] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6786] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6791] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (50381039-a748-417e-93ff-01574a02adda)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6793] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6794] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6799] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6805] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (790e8318-cd2e-49fc-8000-dbe0e4952a56)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6808] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6809] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6814] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6819] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (48d3197e-6971-4871-8938-640fcbe67694)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6820] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6821] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6823] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6828] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6829] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6831] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6834] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (8ab505ed-4249-4e8d-91fb-87a12c79d64d)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6834] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6837] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6838] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6839] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6840] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6847] device (eth1): disconnecting for new activation request.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6848] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6850] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6851] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6852] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6854] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6855] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6857] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6859] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (10acd242-b075-4553-bc77-fcc0f83f9355)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6860] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6862] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6863] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6864] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6866] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6866] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6868] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6871] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (c2670a92-1190-4dc4-bf3b-9f4093abeab4)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6872] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6874] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6875] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6876] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6877] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6878] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6880] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6883] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (83671303-ecd9-4201-8379-2404e9296af6)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6883] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6885] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6886] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6887] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6889] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <warn>  [1769193002.6890] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6892] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6894] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (1a11d950-fce8-4919-b33a-bd91ba57226c)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6895] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6897] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6898] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6899] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6900] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6910] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6911] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6913] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6914] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6920] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6922] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6925] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6928] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6929] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6935] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6938] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6951] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6952] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6957] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6960] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 kernel: Timeout policy base is empty
Jan 23 18:30:02 compute-0 systemd-udevd[51704]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6970] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6971] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6975] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6977] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6980] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6981] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6984] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6990] dhcp4 (eth0): canceled DHCP transaction
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6990] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6990] dhcp4 (eth0): state changed no lease
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.6993] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7007] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7010] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51699 uid=0 result="fail" reason="Device is not activated"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7016] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 23 18:30:02 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7061] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7069] device (eth1): disconnecting for new activation request.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7071] audit: op="connection-activate" uuid="dbab0fcc-2806-5bf7-af25-c9e9c0b22e49" name="ci-private-network" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7072] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7078] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7092] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51699 uid=0 result="success"
Jan 23 18:30:02 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7195] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 23 18:30:02 compute-0 kernel: br-ex: entered promiscuous mode
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7344] device (eth1): Activation: starting connection 'ci-private-network' (dbab0fcc-2806-5bf7-af25-c9e9c0b22e49)
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7349] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7360] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7363] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7370] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7373] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7379] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7380] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7381] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7382] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7383] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7384] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7396] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7402] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7405] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7407] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7414] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 kernel: vlan22: entered promiscuous mode
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7417] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7420] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7422] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7425] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7428] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7431] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 systemd-udevd[51705]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7434] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7437] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7443] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7448] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7454] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7472] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 kernel: vlan21: entered promiscuous mode
Jan 23 18:30:02 compute-0 kernel: vlan20: entered promiscuous mode
Jan 23 18:30:02 compute-0 kernel: vlan23: entered promiscuous mode
Jan 23 18:30:02 compute-0 systemd-udevd[51703]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 18:30:02 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7815] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7829] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7838] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7845] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7847] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7849] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7860] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7869] device (eth1): Activation: successful, device activated.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7873] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7880] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7922] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7930] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7940] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7948] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7973] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7976] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7978] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7979] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7984] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7991] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.7998] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.8005] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.8011] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.8019] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.8026] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 18:30:02 compute-0 NetworkManager[48925]: <info>  [1769193002.8034] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 18:30:03 compute-0 NetworkManager[48925]: <info>  [1769193003.8268] dhcp4 (eth0): state changed new lease, address=38.129.56.100
Jan 23 18:30:03 compute-0 NetworkManager[48925]: <info>  [1769193003.9560] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51699 uid=0 result="success"
Jan 23 18:30:03 compute-0 sudo[52054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uatklytbgqbwrldvbyqvvppwdfjaaaqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193003.5261476-290-16095394350245/AnsiballZ_async_status.py'
Jan 23 18:30:03 compute-0 sudo[52054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:04 compute-0 NetworkManager[48925]: <info>  [1769193004.1320] checkpoint[0x55d6614e1950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 23 18:30:04 compute-0 NetworkManager[48925]: <info>  [1769193004.1323] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51699 uid=0 result="success"
Jan 23 18:30:04 compute-0 python3.9[52056]: ansible-ansible.legacy.async_status Invoked with jid=j832166267849.51693 mode=status _async_dir=/root/.ansible_async
Jan 23 18:30:04 compute-0 sudo[52054]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:04 compute-0 NetworkManager[48925]: <info>  [1769193004.4452] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51699 uid=0 result="success"
Jan 23 18:30:04 compute-0 NetworkManager[48925]: <info>  [1769193004.4473] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51699 uid=0 result="success"
Jan 23 18:30:04 compute-0 NetworkManager[48925]: <info>  [1769193004.6639] audit: op="networking-control" arg="global-dns-configuration" pid=51699 uid=0 result="success"
Jan 23 18:30:04 compute-0 NetworkManager[48925]: <info>  [1769193004.6672] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 23 18:30:04 compute-0 NetworkManager[48925]: <info>  [1769193004.6712] audit: op="networking-control" arg="global-dns-configuration" pid=51699 uid=0 result="success"
Jan 23 18:30:04 compute-0 NetworkManager[48925]: <info>  [1769193004.6737] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51699 uid=0 result="success"
Jan 23 18:30:04 compute-0 NetworkManager[48925]: <info>  [1769193004.8359] checkpoint[0x55d6614e1a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 23 18:30:04 compute-0 NetworkManager[48925]: <info>  [1769193004.8363] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51699 uid=0 result="success"
Jan 23 18:30:04 compute-0 ansible-async_wrapper.py[51697]: Module complete (51697)
Jan 23 18:30:05 compute-0 ansible-async_wrapper.py[51696]: Done in kid B.
Jan 23 18:30:07 compute-0 sudo[52160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fznjterwllpzppuhdyxbaqejyjpjvwte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193003.5261476-290-16095394350245/AnsiballZ_async_status.py'
Jan 23 18:30:07 compute-0 sudo[52160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:07 compute-0 python3.9[52162]: ansible-ansible.legacy.async_status Invoked with jid=j832166267849.51693 mode=status _async_dir=/root/.ansible_async
Jan 23 18:30:07 compute-0 sudo[52160]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:08 compute-0 sudo[52260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiugwyvxvcvzqospgjvroswtbrxwogqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193003.5261476-290-16095394350245/AnsiballZ_async_status.py'
Jan 23 18:30:08 compute-0 sudo[52260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:08 compute-0 python3.9[52262]: ansible-ansible.legacy.async_status Invoked with jid=j832166267849.51693 mode=cleanup _async_dir=/root/.ansible_async
Jan 23 18:30:08 compute-0 sudo[52260]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:08 compute-0 sudo[52412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ratagqyimfuvfchtbfcuqzdzoebpuyhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193008.4407687-317-161693081704026/AnsiballZ_stat.py'
Jan 23 18:30:08 compute-0 sudo[52412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:08 compute-0 python3.9[52414]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:30:08 compute-0 sudo[52412]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:09 compute-0 sudo[52535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tajtrohuqgclmvkglxqyjckpdxecpvyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193008.4407687-317-161693081704026/AnsiballZ_copy.py'
Jan 23 18:30:09 compute-0 sudo[52535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:09 compute-0 python3.9[52537]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193008.4407687-317-161693081704026/.source.returncode _original_basename=.jk3ebaco follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:30:09 compute-0 sudo[52535]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:09 compute-0 sudo[52687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idwasdwwnwiuqypfwxznwsfdbavhdcru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193009.7111685-333-153032281970861/AnsiballZ_stat.py'
Jan 23 18:30:09 compute-0 sudo[52687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:10 compute-0 python3.9[52689]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:30:10 compute-0 sudo[52687]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:10 compute-0 sudo[52810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pszefutkavgeshpoilfwasvfizluxdpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193009.7111685-333-153032281970861/AnsiballZ_copy.py'
Jan 23 18:30:10 compute-0 sudo[52810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:10 compute-0 python3.9[52812]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193009.7111685-333-153032281970861/.source.cfg _original_basename=.b0hfqklw follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:30:10 compute-0 sudo[52810]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:11 compute-0 sudo[52963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndfthcjvjvnlewxjlsmdhtfpdimqgyed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193010.9246824-348-71949679740210/AnsiballZ_systemd.py'
Jan 23 18:30:11 compute-0 sudo[52963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:11 compute-0 python3.9[52965]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:30:11 compute-0 systemd[1]: Reloading Network Manager...
Jan 23 18:30:11 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 18:30:11 compute-0 NetworkManager[48925]: <info>  [1769193011.5842] audit: op="reload" arg="0" pid=52969 uid=0 result="success"
Jan 23 18:30:11 compute-0 NetworkManager[48925]: <info>  [1769193011.5852] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 23 18:30:11 compute-0 systemd[1]: Reloaded Network Manager.
Jan 23 18:30:11 compute-0 sudo[52963]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:11 compute-0 sshd-session[44930]: Connection closed by 192.168.122.30 port 59424
Jan 23 18:30:11 compute-0 sshd-session[44927]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:30:11 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 18:30:11 compute-0 systemd[1]: session-9.scope: Consumed 50.195s CPU time.
Jan 23 18:30:11 compute-0 systemd-logind[792]: Session 9 logged out. Waiting for processes to exit.
Jan 23 18:30:11 compute-0 systemd-logind[792]: Removed session 9.
Jan 23 18:30:17 compute-0 sshd-session[53002]: Accepted publickey for zuul from 192.168.122.30 port 59826 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:30:17 compute-0 systemd-logind[792]: New session 10 of user zuul.
Jan 23 18:30:17 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 23 18:30:17 compute-0 sshd-session[53002]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:30:18 compute-0 python3.9[53155]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:30:19 compute-0 python3.9[53309]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:30:20 compute-0 python3.9[53503]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:30:21 compute-0 sshd-session[53005]: Connection closed by 192.168.122.30 port 59826
Jan 23 18:30:21 compute-0 sshd-session[53002]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:30:21 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 18:30:21 compute-0 systemd[1]: session-10.scope: Consumed 2.365s CPU time.
Jan 23 18:30:21 compute-0 systemd-logind[792]: Session 10 logged out. Waiting for processes to exit.
Jan 23 18:30:21 compute-0 systemd-logind[792]: Removed session 10.
Jan 23 18:30:21 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 18:30:26 compute-0 sshd-session[53531]: Accepted publickey for zuul from 192.168.122.30 port 43728 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:30:26 compute-0 systemd-logind[792]: New session 11 of user zuul.
Jan 23 18:30:26 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 23 18:30:26 compute-0 sshd-session[53531]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:30:27 compute-0 python3.9[53685]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:30:28 compute-0 python3.9[53839]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:30:29 compute-0 sudo[53993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwrnndloxwctmnskeyojgdhmyfstxfat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193029.07214-35-170319157415773/AnsiballZ_setup.py'
Jan 23 18:30:29 compute-0 sudo[53993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:29 compute-0 python3.9[53995]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:30:29 compute-0 sudo[53993]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:30 compute-0 sudo[54078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcocblbtnatrbshqounoiaczytxlvnls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193029.07214-35-170319157415773/AnsiballZ_dnf.py'
Jan 23 18:30:30 compute-0 sudo[54078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:30 compute-0 python3.9[54080]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:30:31 compute-0 sudo[54078]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:32 compute-0 sudo[54231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imswrydnsundycovqquvktoyxetqajcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193032.0403059-47-118024676446581/AnsiballZ_setup.py'
Jan 23 18:30:32 compute-0 sudo[54231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:32 compute-0 python3.9[54233]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:30:33 compute-0 sudo[54231]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:33 compute-0 sudo[54427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmqjjlywyuztyvtckdlddeehzmdwlmvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193033.2531865-58-243386208974176/AnsiballZ_file.py'
Jan 23 18:30:33 compute-0 sudo[54427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:33 compute-0 python3.9[54429]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:30:33 compute-0 sudo[54427]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:34 compute-0 sudo[54579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jksmsrdeoarrxqykydzqanjbwvykpajc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193034.060944-66-250585270939175/AnsiballZ_command.py'
Jan 23 18:30:34 compute-0 sudo[54579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:34 compute-0 python3.9[54581]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:30:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2173014470-merged.mount: Deactivated successfully.
Jan 23 18:30:34 compute-0 podman[54582]: 2026-01-23 18:30:34.916489468 +0000 UTC m=+0.214163855 system refresh
Jan 23 18:30:34 compute-0 sudo[54579]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:35 compute-0 sudo[54740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnpsnazgiortptmzcenioduklvfaetto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193035.1053255-74-45828512512412/AnsiballZ_stat.py'
Jan 23 18:30:35 compute-0 sudo[54740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:35 compute-0 python3.9[54742]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:30:35 compute-0 sudo[54740]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:30:36 compute-0 sudo[54863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmposyggpjpqcjcxxwtbutinynjhbeny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193035.1053255-74-45828512512412/AnsiballZ_copy.py'
Jan 23 18:30:36 compute-0 sudo[54863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:36 compute-0 python3.9[54865]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193035.1053255-74-45828512512412/.source.json follow=False _original_basename=podman_network_config.j2 checksum=87a4711dd765d9316278a708d54a7b90663428dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:30:36 compute-0 sudo[54863]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:36 compute-0 sudo[55015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpnqjryaiuihwndehqouppzqogturcnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193036.5882242-89-273270773100600/AnsiballZ_stat.py'
Jan 23 18:30:36 compute-0 sudo[55015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:37 compute-0 python3.9[55017]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:30:37 compute-0 sudo[55015]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:37 compute-0 sudo[55138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yercqlyupskrvjvgsyohvjjrirqsxmqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193036.5882242-89-273270773100600/AnsiballZ_copy.py'
Jan 23 18:30:37 compute-0 sudo[55138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:37 compute-0 python3.9[55140]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193036.5882242-89-273270773100600/.source.conf follow=False _original_basename=registries.conf.j2 checksum=72ddb04219a06ecdf6a11ec2551ff8d0679beaac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:30:37 compute-0 sudo[55138]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:38 compute-0 sudo[55290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkiogoevynkokeeoycutvhvapxhzbhlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193037.847158-105-138256056103938/AnsiballZ_ini_file.py'
Jan 23 18:30:38 compute-0 sudo[55290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:38 compute-0 python3.9[55292]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:30:38 compute-0 sudo[55290]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:38 compute-0 sudo[55442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcipdcwguctgebewrzbtcdgiwanuegha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193038.6020792-105-235342275379518/AnsiballZ_ini_file.py'
Jan 23 18:30:38 compute-0 sudo[55442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:39 compute-0 python3.9[55444]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:30:39 compute-0 sudo[55442]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:39 compute-0 sudo[55594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koqcwnxuzdwntebnxowdcswnzvvluyzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193039.2329843-105-133766545235943/AnsiballZ_ini_file.py'
Jan 23 18:30:39 compute-0 sudo[55594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:39 compute-0 python3.9[55596]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:30:39 compute-0 sudo[55594]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:40 compute-0 sudo[55746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgmbglyozvaaoluynftxkrlxpjsypext ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193039.8208103-105-196125530491491/AnsiballZ_ini_file.py'
Jan 23 18:30:40 compute-0 sudo[55746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:40 compute-0 python3.9[55748]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:30:40 compute-0 sudo[55746]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:40 compute-0 sudo[55898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltxiljddkzcbfwmzijkwsqrffbsduyfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193040.5418909-136-46149616677378/AnsiballZ_dnf.py'
Jan 23 18:30:40 compute-0 sudo[55898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:41 compute-0 python3.9[55900]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:30:42 compute-0 sudo[55898]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:43 compute-0 sudo[56051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyufzxdgyrhgkqbaovzzfbdlecjdrwyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193042.835868-147-58935933400298/AnsiballZ_setup.py'
Jan 23 18:30:43 compute-0 sudo[56051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:43 compute-0 python3.9[56053]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:30:43 compute-0 sudo[56051]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:43 compute-0 sudo[56205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdhvcatevpiklcfwkosypjcqcujjqdbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193043.6726556-155-263446556899302/AnsiballZ_stat.py'
Jan 23 18:30:43 compute-0 sudo[56205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:44 compute-0 python3.9[56207]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:30:44 compute-0 sudo[56205]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:44 compute-0 sudo[56357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjgycnxpxczyfuvpvzurngiyijdsydke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193044.2979171-164-177064578326703/AnsiballZ_stat.py'
Jan 23 18:30:44 compute-0 sudo[56357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:44 compute-0 python3.9[56359]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:30:44 compute-0 sudo[56357]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:45 compute-0 sudo[56509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwvhjiozdozonadzfyidrfvzjqoosnck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193045.0237465-174-52145480223355/AnsiballZ_command.py'
Jan 23 18:30:45 compute-0 sudo[56509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:45 compute-0 python3.9[56511]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:30:45 compute-0 sudo[56509]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:46 compute-0 sudo[56662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swumgnxpzjjranflbysgoocqsifjnuri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193045.8710794-184-278185765040967/AnsiballZ_service_facts.py'
Jan 23 18:30:46 compute-0 sudo[56662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:46 compute-0 python3.9[56664]: ansible-service_facts Invoked
Jan 23 18:30:46 compute-0 network[56681]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 18:30:46 compute-0 network[56682]: 'network-scripts' will be removed from distribution in near future.
Jan 23 18:30:46 compute-0 network[56683]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 18:30:51 compute-0 sudo[56662]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:52 compute-0 sudo[56966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlgmvvmbacajmhzksbqjsrocryosaega ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769193051.6692972-199-65828013290704/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769193051.6692972-199-65828013290704/args'
Jan 23 18:30:52 compute-0 sudo[56966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:52 compute-0 sudo[56966]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:52 compute-0 sudo[57133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hspwuktbchcczvaiewmysuaogwqsmbta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193052.3419092-210-96702315583518/AnsiballZ_dnf.py'
Jan 23 18:30:52 compute-0 sudo[57133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:52 compute-0 python3.9[57135]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:30:54 compute-0 sudo[57133]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:55 compute-0 sudo[57286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyydvwzxyeaqqoamvqwgsslkrdwrhibo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193054.5093317-223-142632458920889/AnsiballZ_package_facts.py'
Jan 23 18:30:55 compute-0 sudo[57286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:55 compute-0 python3.9[57288]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 23 18:30:55 compute-0 sudo[57286]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:56 compute-0 sudo[57438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmsyitilzsppihssifohnanhddidzxvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193056.00033-233-268578153628174/AnsiballZ_stat.py'
Jan 23 18:30:56 compute-0 sudo[57438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:56 compute-0 python3.9[57440]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:30:56 compute-0 sudo[57438]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:56 compute-0 sudo[57563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktwykomkjonusqhpsssiejogqabpxcju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193056.00033-233-268578153628174/AnsiballZ_copy.py'
Jan 23 18:30:56 compute-0 sudo[57563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:57 compute-0 python3.9[57565]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193056.00033-233-268578153628174/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:30:57 compute-0 sudo[57563]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:57 compute-0 sudo[57717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdspkxodtcvsrvyfpfumkgkpagszicku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193057.2487183-248-231260041494785/AnsiballZ_stat.py'
Jan 23 18:30:57 compute-0 sudo[57717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:57 compute-0 python3.9[57719]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:30:57 compute-0 sudo[57717]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:58 compute-0 sudo[57842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pufzottmqpvghgupvmbqvgownrldiwif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193057.2487183-248-231260041494785/AnsiballZ_copy.py'
Jan 23 18:30:58 compute-0 sudo[57842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:58 compute-0 python3.9[57844]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193057.2487183-248-231260041494785/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:30:58 compute-0 sudo[57842]: pam_unix(sudo:session): session closed for user root
Jan 23 18:30:59 compute-0 sudo[57996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvhiuogehywphkgxhgasdwvcxysvqlvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193058.7014284-269-273055443571670/AnsiballZ_lineinfile.py'
Jan 23 18:30:59 compute-0 sudo[57996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:30:59 compute-0 python3.9[57998]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:30:59 compute-0 sudo[57996]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:00 compute-0 sudo[58150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lipbcgfereyixxnhwsppkmrlrppayaxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193059.8099031-284-201732558159819/AnsiballZ_setup.py'
Jan 23 18:31:00 compute-0 sudo[58150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:00 compute-0 python3.9[58152]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:31:00 compute-0 sudo[58150]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:01 compute-0 sudo[58234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuppasxiisadsynqacoleztttvpuwtub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193059.8099031-284-201732558159819/AnsiballZ_systemd.py'
Jan 23 18:31:01 compute-0 sudo[58234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:01 compute-0 python3.9[58236]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:31:01 compute-0 sudo[58234]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:02 compute-0 sudo[58388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzyohpeixdxhyseyzgxtnfavfognjukv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193061.9477882-300-79961448649963/AnsiballZ_setup.py'
Jan 23 18:31:02 compute-0 sudo[58388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:02 compute-0 python3.9[58390]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:31:02 compute-0 sudo[58388]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:03 compute-0 sudo[58472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijgbxqlwqhxqggcwweiflukwdvwseuha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193061.9477882-300-79961448649963/AnsiballZ_systemd.py'
Jan 23 18:31:03 compute-0 sudo[58472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:03 compute-0 python3.9[58474]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:31:03 compute-0 chronyd[794]: chronyd exiting
Jan 23 18:31:03 compute-0 systemd[1]: Stopping NTP client/server...
Jan 23 18:31:03 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 23 18:31:03 compute-0 systemd[1]: Stopped NTP client/server.
Jan 23 18:31:03 compute-0 systemd[1]: Starting NTP client/server...
Jan 23 18:31:03 compute-0 chronyd[58483]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 23 18:31:03 compute-0 chronyd[58483]: Frequency -23.932 +/- 0.166 ppm read from /var/lib/chrony/drift
Jan 23 18:31:03 compute-0 chronyd[58483]: Loaded seccomp filter (level 2)
Jan 23 18:31:03 compute-0 systemd[1]: Started NTP client/server.
Jan 23 18:31:03 compute-0 sudo[58472]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:03 compute-0 sshd-session[53534]: Connection closed by 192.168.122.30 port 43728
Jan 23 18:31:03 compute-0 sshd-session[53531]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:31:03 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 18:31:03 compute-0 systemd[1]: session-11.scope: Consumed 24.921s CPU time.
Jan 23 18:31:03 compute-0 systemd-logind[792]: Session 11 logged out. Waiting for processes to exit.
Jan 23 18:31:03 compute-0 systemd-logind[792]: Removed session 11.
Jan 23 18:31:09 compute-0 sshd-session[58509]: Accepted publickey for zuul from 192.168.122.30 port 42080 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:31:09 compute-0 systemd-logind[792]: New session 12 of user zuul.
Jan 23 18:31:09 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 23 18:31:09 compute-0 sshd-session[58509]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:31:10 compute-0 sudo[58662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktzkscmqdecuimogitprggmklerjuqpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193069.5696115-17-212264133192316/AnsiballZ_file.py'
Jan 23 18:31:10 compute-0 sudo[58662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:10 compute-0 python3.9[58664]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:10 compute-0 sudo[58662]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:10 compute-0 sudo[58814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfxrveebddxjawptewenpjydvtikaslu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193070.5166593-29-107646121585577/AnsiballZ_stat.py'
Jan 23 18:31:10 compute-0 sudo[58814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:11 compute-0 python3.9[58816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:11 compute-0 sudo[58814]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:11 compute-0 sudo[58937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abibobjwokqaogkpjnxzkxnkwowlhybl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193070.5166593-29-107646121585577/AnsiballZ_copy.py'
Jan 23 18:31:11 compute-0 sudo[58937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:11 compute-0 python3.9[58939]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193070.5166593-29-107646121585577/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:11 compute-0 sudo[58937]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:12 compute-0 sshd-session[58512]: Connection closed by 192.168.122.30 port 42080
Jan 23 18:31:12 compute-0 sshd-session[58509]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:31:12 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 18:31:12 compute-0 systemd[1]: session-12.scope: Consumed 1.552s CPU time.
Jan 23 18:31:12 compute-0 systemd-logind[792]: Session 12 logged out. Waiting for processes to exit.
Jan 23 18:31:12 compute-0 systemd-logind[792]: Removed session 12.
Jan 23 18:31:17 compute-0 sshd-session[58964]: Accepted publickey for zuul from 192.168.122.30 port 42082 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:31:17 compute-0 systemd-logind[792]: New session 13 of user zuul.
Jan 23 18:31:17 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 23 18:31:17 compute-0 sshd-session[58964]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:31:18 compute-0 python3.9[59117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:31:19 compute-0 sudo[59271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biwdozyqnydouveiudwizzjcbftoipue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193078.4620879-28-131347673536040/AnsiballZ_file.py'
Jan 23 18:31:19 compute-0 sudo[59271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:19 compute-0 python3.9[59273]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:19 compute-0 sudo[59271]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:19 compute-0 sudo[59446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atwqfqlnibavmlxlwawttgoyquimozzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193079.543377-36-189513146926744/AnsiballZ_stat.py'
Jan 23 18:31:19 compute-0 sudo[59446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:20 compute-0 python3.9[59448]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:20 compute-0 sudo[59446]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:20 compute-0 sudo[59569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybkewyiodseflzjktrannanotebvugud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193079.543377-36-189513146926744/AnsiballZ_copy.py'
Jan 23 18:31:20 compute-0 sudo[59569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:20 compute-0 python3.9[59571]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769193079.543377-36-189513146926744/.source.json _original_basename=.j1ls4r3x follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:20 compute-0 sudo[59569]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:21 compute-0 sudo[59721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmtzfclunlaswoleqqlorrhegerltnjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193081.1707642-59-49893256570272/AnsiballZ_stat.py'
Jan 23 18:31:21 compute-0 sudo[59721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:21 compute-0 python3.9[59723]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:21 compute-0 sudo[59721]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:21 compute-0 sudo[59844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpaqxqomlgdqazptfjwvpziahmygckql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193081.1707642-59-49893256570272/AnsiballZ_copy.py'
Jan 23 18:31:21 compute-0 sudo[59844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:22 compute-0 python3.9[59846]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193081.1707642-59-49893256570272/.source _original_basename=.p5_ij_1h follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:22 compute-0 sudo[59844]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:22 compute-0 sudo[59996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pemzapltqjefdwweaohjbijpyznlbcgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193082.312777-75-30355263630679/AnsiballZ_file.py'
Jan 23 18:31:22 compute-0 sudo[59996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:22 compute-0 python3.9[59998]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:31:22 compute-0 sudo[59996]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:23 compute-0 sudo[60148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxcadalritcbpddfzpexjyuhazulfxjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193082.9347916-83-145004688009756/AnsiballZ_stat.py'
Jan 23 18:31:23 compute-0 sudo[60148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:23 compute-0 python3.9[60150]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:23 compute-0 sudo[60148]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:23 compute-0 sudo[60271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyrahjfjjmczxgvcxzhrqhwntabatknz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193082.9347916-83-145004688009756/AnsiballZ_copy.py'
Jan 23 18:31:23 compute-0 sudo[60271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:23 compute-0 python3.9[60273]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193082.9347916-83-145004688009756/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:31:23 compute-0 sudo[60271]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:24 compute-0 sudo[60423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyyqdupimtbvkrojvomiignzxddphdrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193084.0111396-83-254146471797141/AnsiballZ_stat.py'
Jan 23 18:31:24 compute-0 sudo[60423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:24 compute-0 python3.9[60425]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:24 compute-0 sudo[60423]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:24 compute-0 sudo[60546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pexcunjoytobwdrgrrzarzythusynugs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193084.0111396-83-254146471797141/AnsiballZ_copy.py'
Jan 23 18:31:24 compute-0 sudo[60546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:24 compute-0 python3.9[60548]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193084.0111396-83-254146471797141/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:31:24 compute-0 sudo[60546]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:25 compute-0 sudo[60698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehjimgrvxhzvkdykmxrbqxibkuvtwmqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193085.113171-112-113924009957151/AnsiballZ_file.py'
Jan 23 18:31:25 compute-0 sudo[60698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:25 compute-0 python3.9[60700]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:25 compute-0 sudo[60698]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:25 compute-0 sudo[60850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhueyhtnngsubldwbdsrzzinhmfhqorc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193085.7368433-120-249613033970701/AnsiballZ_stat.py'
Jan 23 18:31:25 compute-0 sudo[60850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:26 compute-0 python3.9[60852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:26 compute-0 sudo[60850]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:26 compute-0 sudo[60973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbstqpgqxyxknskyeshitxwsachxyafg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193085.7368433-120-249613033970701/AnsiballZ_copy.py'
Jan 23 18:31:26 compute-0 sudo[60973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:26 compute-0 python3.9[60975]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193085.7368433-120-249613033970701/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:26 compute-0 sudo[60973]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:27 compute-0 sudo[61125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsxbefledsjwubotrkqhegunarsxtndv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193086.8375492-135-198939830756592/AnsiballZ_stat.py'
Jan 23 18:31:27 compute-0 sudo[61125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:27 compute-0 python3.9[61127]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:27 compute-0 sudo[61125]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:27 compute-0 sudo[61248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sopcgxxbvdhjbxhyiuymodfpqdohyvgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193086.8375492-135-198939830756592/AnsiballZ_copy.py'
Jan 23 18:31:27 compute-0 sudo[61248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:27 compute-0 python3.9[61250]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193086.8375492-135-198939830756592/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:27 compute-0 sudo[61248]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:28 compute-0 sudo[61400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycjpcwvooqxlgbfejgqlnoiuragbswdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193087.934579-150-49924234962520/AnsiballZ_systemd.py'
Jan 23 18:31:28 compute-0 sudo[61400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:28 compute-0 python3.9[61402]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:31:28 compute-0 systemd[1]: Reloading.
Jan 23 18:31:28 compute-0 systemd-rc-local-generator[61427]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:31:28 compute-0 systemd-sysv-generator[61431]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:31:29 compute-0 systemd[1]: Reloading.
Jan 23 18:31:29 compute-0 systemd-sysv-generator[61469]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:31:29 compute-0 systemd-rc-local-generator[61465]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:31:29 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 23 18:31:29 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 23 18:31:29 compute-0 sudo[61400]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:29 compute-0 sudo[61625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzrwmnfxrirdsoxqqropwcrgwlzqrond ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193089.5346823-158-208067180820482/AnsiballZ_stat.py'
Jan 23 18:31:29 compute-0 sudo[61625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:29 compute-0 python3.9[61627]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:29 compute-0 sudo[61625]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:30 compute-0 sudo[61748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nasmgcyxoweanikssaagzegzezgsptwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193089.5346823-158-208067180820482/AnsiballZ_copy.py'
Jan 23 18:31:30 compute-0 sudo[61748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:30 compute-0 python3.9[61750]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193089.5346823-158-208067180820482/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:30 compute-0 sudo[61748]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:30 compute-0 sudo[61900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvddxkjqztbzswddiwpjzbgfracdmfvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193090.65094-173-51673499997437/AnsiballZ_stat.py'
Jan 23 18:31:30 compute-0 sudo[61900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:31 compute-0 python3.9[61902]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:31 compute-0 sudo[61900]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:31 compute-0 sudo[62023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkurzngzplywgcirmehfabcqlcqwqnbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193090.65094-173-51673499997437/AnsiballZ_copy.py'
Jan 23 18:31:31 compute-0 sudo[62023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:31 compute-0 python3.9[62025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193090.65094-173-51673499997437/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:31 compute-0 sudo[62023]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:32 compute-0 sudo[62175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubbgnsoolgaptgmlobnifwokavdnqvju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193091.719724-188-229384557711765/AnsiballZ_systemd.py'
Jan 23 18:31:32 compute-0 sudo[62175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:32 compute-0 python3.9[62177]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:31:32 compute-0 systemd[1]: Reloading.
Jan 23 18:31:32 compute-0 systemd-rc-local-generator[62205]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:31:32 compute-0 systemd-sysv-generator[62208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:31:32 compute-0 systemd[1]: Reloading.
Jan 23 18:31:32 compute-0 systemd-rc-local-generator[62241]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:31:32 compute-0 systemd-sysv-generator[62245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:31:32 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 18:31:32 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 18:31:32 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 18:31:32 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 18:31:32 compute-0 sudo[62175]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:33 compute-0 python3.9[62403]: ansible-ansible.builtin.service_facts Invoked
Jan 23 18:31:33 compute-0 network[62420]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 18:31:33 compute-0 network[62421]: 'network-scripts' will be removed from distribution in near future.
Jan 23 18:31:33 compute-0 network[62422]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 18:31:37 compute-0 sudo[62682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiljlnjvniqqxnbwyfwzggbrwsvwasxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193097.3090231-204-211106427658849/AnsiballZ_systemd.py'
Jan 23 18:31:37 compute-0 sudo[62682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:38 compute-0 python3.9[62684]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:31:38 compute-0 systemd[1]: Reloading.
Jan 23 18:31:38 compute-0 systemd-rc-local-generator[62717]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:31:38 compute-0 systemd-sysv-generator[62720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:31:38 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 23 18:31:38 compute-0 iptables.init[62724]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 23 18:31:38 compute-0 iptables.init[62724]: iptables: Flushing firewall rules: [  OK  ]
Jan 23 18:31:38 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 23 18:31:38 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 23 18:31:38 compute-0 sudo[62682]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:39 compute-0 sudo[62918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lquwwmcnaxxtixvfxobdfqldcmwsrmkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193098.880888-204-50861765105363/AnsiballZ_systemd.py'
Jan 23 18:31:39 compute-0 sudo[62918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:39 compute-0 python3.9[62920]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:31:39 compute-0 sudo[62918]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:39 compute-0 sudo[63072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkeeretabrlksfutumpcwrmixqcgoacu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193099.7148829-220-280549308210785/AnsiballZ_systemd.py'
Jan 23 18:31:39 compute-0 sudo[63072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:40 compute-0 python3.9[63074]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:31:40 compute-0 systemd[1]: Reloading.
Jan 23 18:31:40 compute-0 systemd-rc-local-generator[63104]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:31:40 compute-0 systemd-sysv-generator[63108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:31:40 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 23 18:31:40 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 23 18:31:40 compute-0 sudo[63072]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:41 compute-0 sudo[63264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnhwiarolrhmssfcyhhnstaamggtgjlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193100.8155034-228-215119208139102/AnsiballZ_command.py'
Jan 23 18:31:41 compute-0 sudo[63264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:41 compute-0 python3.9[63266]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:31:41 compute-0 sudo[63264]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:42 compute-0 sudo[63417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlakcpwnzvatyveshlygozyaycoegsvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193101.8123517-242-103033353291871/AnsiballZ_stat.py'
Jan 23 18:31:42 compute-0 sudo[63417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:42 compute-0 python3.9[63419]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:42 compute-0 sudo[63417]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:42 compute-0 sudo[63542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cudlgkamzgxbywtnvdtevmfhtqzozqjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193101.8123517-242-103033353291871/AnsiballZ_copy.py'
Jan 23 18:31:42 compute-0 sudo[63542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:43 compute-0 python3.9[63544]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193101.8123517-242-103033353291871/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:43 compute-0 sudo[63542]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:43 compute-0 sudo[63695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pycfwrqhwctktnabrcwtjegyczqlhplq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193103.2308476-257-238050961911326/AnsiballZ_systemd.py'
Jan 23 18:31:43 compute-0 sudo[63695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:43 compute-0 python3.9[63697]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:31:43 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 23 18:31:43 compute-0 sshd[1008]: Received SIGHUP; restarting.
Jan 23 18:31:43 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 23 18:31:43 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 23 18:31:43 compute-0 sshd[1008]: Server listening on :: port 22.
Jan 23 18:31:43 compute-0 sudo[63695]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:44 compute-0 sudo[63851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieefijvvlwtbymthndciozixkiomcrvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193104.1470752-265-128393491972303/AnsiballZ_file.py'
Jan 23 18:31:44 compute-0 sudo[63851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:44 compute-0 python3.9[63853]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:44 compute-0 sudo[63851]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:45 compute-0 sudo[64003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvmdufkujhxvjipfivbleqdxfxwiapjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193104.9032145-273-186849225650020/AnsiballZ_stat.py'
Jan 23 18:31:45 compute-0 sudo[64003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:45 compute-0 python3.9[64005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:45 compute-0 sudo[64003]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:45 compute-0 sudo[64126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgpthywmdjvtyrqjwijodjwtaqseclei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193104.9032145-273-186849225650020/AnsiballZ_copy.py'
Jan 23 18:31:45 compute-0 sudo[64126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:45 compute-0 python3.9[64128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193104.9032145-273-186849225650020/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:45 compute-0 sudo[64126]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:46 compute-0 sudo[64278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnuhzrhzjhgokoltmdxhcqairpihsjbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193106.1351225-291-246870284642503/AnsiballZ_timezone.py'
Jan 23 18:31:46 compute-0 sudo[64278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:46 compute-0 python3.9[64280]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 18:31:46 compute-0 systemd[1]: Starting Time & Date Service...
Jan 23 18:31:46 compute-0 systemd[1]: Started Time & Date Service.
Jan 23 18:31:46 compute-0 sudo[64278]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:47 compute-0 sudo[64434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgtgcrjnonhxdwjylxkymxmnvzwhxvzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193107.0567436-300-241414798626843/AnsiballZ_file.py'
Jan 23 18:31:47 compute-0 sudo[64434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:47 compute-0 python3.9[64436]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:47 compute-0 sudo[64434]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:48 compute-0 sudo[64586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oormsapksddtcuwnnqtubdldgaagudal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193107.80764-308-120805830215874/AnsiballZ_stat.py'
Jan 23 18:31:48 compute-0 sudo[64586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:48 compute-0 python3.9[64588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:48 compute-0 sudo[64586]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:48 compute-0 sudo[64709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peewbphvvozzquuekngszdjtuqczybro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193107.80764-308-120805830215874/AnsiballZ_copy.py'
Jan 23 18:31:48 compute-0 sudo[64709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:48 compute-0 python3.9[64711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193107.80764-308-120805830215874/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:48 compute-0 sudo[64709]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:49 compute-0 sudo[64861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcjutybeifrdjmkjsnnewcgyoytuqory ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193108.9366953-323-81007700848321/AnsiballZ_stat.py'
Jan 23 18:31:49 compute-0 sudo[64861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:49 compute-0 python3.9[64863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:49 compute-0 sudo[64861]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:49 compute-0 sudo[64984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imqtnuisbgojgnkuqdhoxvoxpslvyusp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193108.9366953-323-81007700848321/AnsiballZ_copy.py'
Jan 23 18:31:49 compute-0 sudo[64984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:49 compute-0 python3.9[64986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193108.9366953-323-81007700848321/.source.yaml _original_basename=.v3u39cou follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:49 compute-0 sudo[64984]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:50 compute-0 sudo[65136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwakfridopcxjjbrhueecnknvfjvmahm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193110.101717-338-247933900209645/AnsiballZ_stat.py'
Jan 23 18:31:50 compute-0 sudo[65136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:50 compute-0 python3.9[65138]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:50 compute-0 sudo[65136]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:50 compute-0 sudo[65259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwmvdlphdxapwhxhhzptkfoyoaezxtum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193110.101717-338-247933900209645/AnsiballZ_copy.py'
Jan 23 18:31:50 compute-0 sudo[65259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:51 compute-0 python3.9[65261]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193110.101717-338-247933900209645/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:51 compute-0 sudo[65259]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:51 compute-0 sudo[65411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahjcpqulewmbcwznruziehhmyycsvgfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193111.3280416-353-148105018292692/AnsiballZ_command.py'
Jan 23 18:31:51 compute-0 sudo[65411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:51 compute-0 python3.9[65413]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:31:51 compute-0 sudo[65411]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:52 compute-0 sudo[65564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzvityifhxnrrkiquxntxwouoozsrlau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193111.9129455-361-99947766853974/AnsiballZ_command.py'
Jan 23 18:31:52 compute-0 sudo[65564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:52 compute-0 python3.9[65566]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:31:52 compute-0 sudo[65564]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:52 compute-0 sudo[65717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neijlepbqowrliaddgifaseqoqidlcvv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769193112.5475955-369-77852409777220/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 18:31:52 compute-0 sudo[65717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:53 compute-0 python3[65719]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 18:31:53 compute-0 sudo[65717]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:53 compute-0 sudo[65869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsnsdanvnqrqsmhljybmihnfqmegmgmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193113.5078385-377-13907999063760/AnsiballZ_stat.py'
Jan 23 18:31:53 compute-0 sudo[65869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:54 compute-0 python3.9[65871]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:54 compute-0 sudo[65869]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:54 compute-0 sudo[65992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khalkbortamtgwtftnfzbxodyqrpftnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193113.5078385-377-13907999063760/AnsiballZ_copy.py'
Jan 23 18:31:54 compute-0 sudo[65992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:54 compute-0 python3.9[65994]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193113.5078385-377-13907999063760/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:54 compute-0 sudo[65992]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:55 compute-0 sudo[66144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adupqtxluzsxbpwqwsrwglkjgozuftxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193115.0756035-392-108728721589289/AnsiballZ_stat.py'
Jan 23 18:31:55 compute-0 sudo[66144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:55 compute-0 python3.9[66146]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:55 compute-0 sudo[66144]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:55 compute-0 sudo[66267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrtpwklwbtasqyahfhfozrsajhsaiisj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193115.0756035-392-108728721589289/AnsiballZ_copy.py'
Jan 23 18:31:55 compute-0 sudo[66267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:56 compute-0 python3.9[66269]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193115.0756035-392-108728721589289/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:56 compute-0 sudo[66267]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:56 compute-0 sudo[66419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktdkrfberspsziddfxrjtmnyuzhvwaaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193116.2615247-407-86597154952602/AnsiballZ_stat.py'
Jan 23 18:31:56 compute-0 sudo[66419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:56 compute-0 python3.9[66421]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:56 compute-0 sudo[66419]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:57 compute-0 sudo[66542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpltecxyavkznuactlebutnzzoiuhtnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193116.2615247-407-86597154952602/AnsiballZ_copy.py'
Jan 23 18:31:57 compute-0 sudo[66542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:57 compute-0 python3.9[66544]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193116.2615247-407-86597154952602/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:57 compute-0 sudo[66542]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:57 compute-0 sudo[66694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-strzdnrxawhxoxqzgurumrctjmogcgbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193117.458331-422-136878411889795/AnsiballZ_stat.py'
Jan 23 18:31:57 compute-0 sudo[66694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:57 compute-0 python3.9[66696]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:57 compute-0 sudo[66694]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:58 compute-0 sudo[66817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qifbdbrkmdyvqkoiquqtpkatuwprxrlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193117.458331-422-136878411889795/AnsiballZ_copy.py'
Jan 23 18:31:58 compute-0 sudo[66817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:58 compute-0 python3.9[66819]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193117.458331-422-136878411889795/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:58 compute-0 sudo[66817]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:59 compute-0 sudo[66969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qthdoekgtzokfqdkqwrzrnvvlmnrrdnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193118.703759-437-70481085166401/AnsiballZ_stat.py'
Jan 23 18:31:59 compute-0 sudo[66969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:59 compute-0 python3.9[66971]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:31:59 compute-0 sudo[66969]: pam_unix(sudo:session): session closed for user root
Jan 23 18:31:59 compute-0 sudo[67092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vurnodranasdptokffdxoicmcqryqvza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193118.703759-437-70481085166401/AnsiballZ_copy.py'
Jan 23 18:31:59 compute-0 sudo[67092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:31:59 compute-0 python3.9[67094]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193118.703759-437-70481085166401/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:31:59 compute-0 sudo[67092]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:00 compute-0 sudo[67244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbemwrwrzjratkscbtccqavvfngxatjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193119.9234881-452-113716995535358/AnsiballZ_file.py'
Jan 23 18:32:00 compute-0 sudo[67244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:00 compute-0 python3.9[67246]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:32:00 compute-0 sudo[67244]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:00 compute-0 sudo[67396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tehwptygfuchakvawohhwluhykmrnfvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193120.512342-460-86178732842381/AnsiballZ_command.py'
Jan 23 18:32:00 compute-0 sudo[67396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:00 compute-0 python3.9[67398]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:32:01 compute-0 sudo[67396]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:01 compute-0 sudo[67555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peshkxefivedotcfplvjmvjeretrkdqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193121.176445-468-48690330347575/AnsiballZ_blockinfile.py'
Jan 23 18:32:01 compute-0 sudo[67555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:01 compute-0 python3.9[67557]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:32:01 compute-0 sudo[67555]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:02 compute-0 sudo[67708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eotuawzwpexytpxhnqlwozlfgluqwrnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193122.0352106-477-258208717266889/AnsiballZ_file.py'
Jan 23 18:32:02 compute-0 sudo[67708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:02 compute-0 python3.9[67710]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:32:02 compute-0 sudo[67708]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:02 compute-0 sudo[67860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvtwqgbjljsxjjgjzasltgrsudhzsxrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193122.6615946-477-188736673770101/AnsiballZ_file.py'
Jan 23 18:32:02 compute-0 sudo[67860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:03 compute-0 python3.9[67862]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:32:03 compute-0 sudo[67860]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:03 compute-0 sudo[68012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acsmjkkxjdouaknvrataddhwsxyxsxvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193123.2977624-492-228100088142096/AnsiballZ_mount.py'
Jan 23 18:32:03 compute-0 sudo[68012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:03 compute-0 python3.9[68014]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 18:32:04 compute-0 sudo[68012]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:04 compute-0 sudo[68165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-escflapgjoehdonvhxlugpmgknfclizb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193124.1567152-492-148040701499651/AnsiballZ_mount.py'
Jan 23 18:32:04 compute-0 sudo[68165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:04 compute-0 python3.9[68167]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 18:32:04 compute-0 sudo[68165]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:05 compute-0 sshd-session[58967]: Connection closed by 192.168.122.30 port 42082
Jan 23 18:32:05 compute-0 sshd-session[58964]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:32:05 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 18:32:05 compute-0 systemd[1]: session-13.scope: Consumed 34.058s CPU time.
Jan 23 18:32:05 compute-0 systemd-logind[792]: Session 13 logged out. Waiting for processes to exit.
Jan 23 18:32:05 compute-0 systemd-logind[792]: Removed session 13.
Jan 23 18:32:10 compute-0 sshd-session[68193]: Accepted publickey for zuul from 192.168.122.30 port 49166 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:32:10 compute-0 systemd-logind[792]: New session 14 of user zuul.
Jan 23 18:32:10 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 23 18:32:10 compute-0 sshd-session[68193]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:32:11 compute-0 sudo[68346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxrlwyaqkfjornfpghuzqkhfkfnuadio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193130.6163516-16-253130371752350/AnsiballZ_tempfile.py'
Jan 23 18:32:11 compute-0 sudo[68346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:11 compute-0 python3.9[68348]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 23 18:32:11 compute-0 sudo[68346]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:11 compute-0 sudo[68498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unwmhxuhunnturjiwvkazrcapymdcpkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193131.4385092-28-195185678866250/AnsiballZ_stat.py'
Jan 23 18:32:11 compute-0 sudo[68498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:12 compute-0 python3.9[68500]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:32:12 compute-0 sudo[68498]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:12 compute-0 sudo[68650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuhvxppaxliamzapegveaehbhjimibih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193132.2493105-38-238683603456180/AnsiballZ_setup.py'
Jan 23 18:32:12 compute-0 sudo[68650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:13 compute-0 python3.9[68652]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:32:13 compute-0 sudo[68650]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:13 compute-0 sudo[68802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pacwjlbndkttzjdatyhsaphiffhmedbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193133.3383477-47-200522840345060/AnsiballZ_blockinfile.py'
Jan 23 18:32:13 compute-0 sudo[68802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:13 compute-0 python3.9[68804]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCobDOsKKtrBxxiW11IdJB+DSS7U3BXSBxEvj98QzFsxUb7LT72c/qQyw1FndkcLhf/ByB4jb62eQ2rMtw0t7LYZv8TnCwi/5eP3Hr86L7zpXFmzXd9PB/86i7dB7w5TocA1lSGVXlesybtRng2vdk4NKxmKFVKgAJk7aqIhUlF1ChIqfXa2Ec+MoOQQHBTJ4z2LF6Flc7HGurN1JS4EV7thWcZUo3ZlyiTNXsdofgbHnJDgm9mDvadlIusWxj90mDGG2GKCsThhd4TUOJ2qmruFYlbuao+E3TI29WS2hDLFfSfMri52xY4EdSf4ufNFe8o7Hn0XVT8F7+Ej4MBBWYZuRyi+g+sb6PtPSIdGf+UeJ36olx+bPEosasEAMFykrRuJ9nFga+N8nM8xEqkOeAC6b/cV9QkXrxXnsdwW4hw3qRlgpp3KdzUC04ogUerCZY/ZDY6WkkI7ctYP9C3XBa1365qJffsXZZXs0f+Efmg0zJQL1c9Bpc9LFxygb1rmes=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBTQDdW9v4CHI+JR/pzChdYLJQc7s/5lZoWJSYnxBiI
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFfSTOePL8EECyCguUZi5cdDHdVP5VkJulKKYHgxi96opql/u9hWMjGIzUS1jDVZLEqQGHVCg+p+I+MX2WiarKk=
                                             create=True mode=0644 path=/tmp/ansible.ys_y579f state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:32:13 compute-0 sudo[68802]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:14 compute-0 sudo[68954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnegjjgujulgfrpejmpenaywjidvztld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193134.1389866-55-133151007268396/AnsiballZ_command.py'
Jan 23 18:32:14 compute-0 sudo[68954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:14 compute-0 python3.9[68956]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ys_y579f' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:32:14 compute-0 sudo[68954]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:15 compute-0 sudo[69108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibfwcaoihwvqsvnrnilzjbuqbvlukgac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193135.1266038-63-201953887182920/AnsiballZ_file.py'
Jan 23 18:32:15 compute-0 sudo[69108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:15 compute-0 python3.9[69110]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ys_y579f state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:32:15 compute-0 sudo[69108]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:16 compute-0 sshd-session[68196]: Connection closed by 192.168.122.30 port 49166
Jan 23 18:32:16 compute-0 sshd-session[68193]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:32:16 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 18:32:16 compute-0 systemd[1]: session-14.scope: Consumed 3.281s CPU time.
Jan 23 18:32:16 compute-0 systemd-logind[792]: Session 14 logged out. Waiting for processes to exit.
Jan 23 18:32:16 compute-0 systemd-logind[792]: Removed session 14.
Jan 23 18:32:16 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 18:32:21 compute-0 sshd-session[69137]: Accepted publickey for zuul from 192.168.122.30 port 37288 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:32:21 compute-0 systemd-logind[792]: New session 15 of user zuul.
Jan 23 18:32:21 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 23 18:32:21 compute-0 sshd-session[69137]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:32:22 compute-0 python3.9[69290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:32:23 compute-0 sudo[69444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwvjcgoraglwxwtjnyqrjkapkzfttusd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193143.1868465-27-10429609355168/AnsiballZ_systemd.py'
Jan 23 18:32:23 compute-0 sudo[69444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:24 compute-0 python3.9[69446]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 18:32:24 compute-0 sudo[69444]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:24 compute-0 sudo[69598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ropaqifhnjtezevbzoubabgjzilifjuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193144.3023317-35-114152464315213/AnsiballZ_systemd.py'
Jan 23 18:32:24 compute-0 sudo[69598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:24 compute-0 python3.9[69600]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:32:24 compute-0 sudo[69598]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:25 compute-0 sudo[69751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fluuoykkwgyzcrgfouvhwjswtdhasixc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193145.1513953-44-32976280738187/AnsiballZ_command.py'
Jan 23 18:32:25 compute-0 sudo[69751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:25 compute-0 python3.9[69753]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:32:25 compute-0 sudo[69751]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:26 compute-0 sudo[69904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwmlboccutsryarqnbvwhadbzavfezqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193145.9356432-52-239924380700235/AnsiballZ_stat.py'
Jan 23 18:32:26 compute-0 sudo[69904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:26 compute-0 python3.9[69906]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:32:26 compute-0 sudo[69904]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:27 compute-0 sudo[70058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pympnirabrndbcfheihuiboppnfhnkqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193146.7387817-60-73719798755956/AnsiballZ_command.py'
Jan 23 18:32:27 compute-0 sudo[70058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:27 compute-0 python3.9[70060]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:32:27 compute-0 sudo[70058]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:27 compute-0 sudo[70213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwcreqtyvkbairwejxosrexgnpgcigwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193147.4024668-68-105451525830573/AnsiballZ_file.py'
Jan 23 18:32:27 compute-0 sudo[70213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:28 compute-0 python3.9[70215]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:32:28 compute-0 sudo[70213]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:28 compute-0 sshd-session[69140]: Connection closed by 192.168.122.30 port 37288
Jan 23 18:32:28 compute-0 sshd-session[69137]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:32:28 compute-0 systemd-logind[792]: Session 15 logged out. Waiting for processes to exit.
Jan 23 18:32:28 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 18:32:28 compute-0 systemd[1]: session-15.scope: Consumed 4.324s CPU time.
Jan 23 18:32:28 compute-0 systemd-logind[792]: Removed session 15.
Jan 23 18:32:33 compute-0 sshd-session[70240]: Accepted publickey for zuul from 192.168.122.30 port 32888 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:32:33 compute-0 systemd-logind[792]: New session 16 of user zuul.
Jan 23 18:32:33 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 23 18:32:33 compute-0 sshd-session[70240]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:32:34 compute-0 python3.9[70393]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:32:35 compute-0 sudo[70547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvzttmjwrqfuqqsurytqxznlbpdyfxhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193155.0452063-29-187812502202209/AnsiballZ_setup.py'
Jan 23 18:32:35 compute-0 sudo[70547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:35 compute-0 python3.9[70549]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:32:35 compute-0 sudo[70547]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:36 compute-0 sudo[70631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyupfarvbixmzujzpsaqmdqvqjchkmsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193155.0452063-29-187812502202209/AnsiballZ_dnf.py'
Jan 23 18:32:36 compute-0 sudo[70631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:36 compute-0 python3.9[70633]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 18:32:37 compute-0 sudo[70631]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:38 compute-0 python3.9[70784]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:32:39 compute-0 python3.9[70935]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 18:32:40 compute-0 python3.9[71085]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:32:40 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 18:32:41 compute-0 python3.9[71236]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:32:41 compute-0 sshd-session[70243]: Connection closed by 192.168.122.30 port 32888
Jan 23 18:32:41 compute-0 sshd-session[70240]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:32:41 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 18:32:41 compute-0 systemd[1]: session-16.scope: Consumed 5.974s CPU time.
Jan 23 18:32:41 compute-0 systemd-logind[792]: Session 16 logged out. Waiting for processes to exit.
Jan 23 18:32:41 compute-0 systemd-logind[792]: Removed session 16.
Jan 23 18:32:50 compute-0 sshd-session[71261]: Accepted publickey for zuul from 38.129.56.53 port 54696 ssh2: RSA SHA256:ZYPc6R6mrhDH6kGu4oaZ6KY9jveRJgYu3lvZrCF0Dxw
Jan 23 18:32:50 compute-0 systemd-logind[792]: New session 17 of user zuul.
Jan 23 18:32:50 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 23 18:32:50 compute-0 sshd-session[71261]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:32:50 compute-0 sudo[71337]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqbsavgfxzvnzbnyfbekuungnqhwtdxt ; /usr/bin/python3'
Jan 23 18:32:50 compute-0 sudo[71337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:50 compute-0 useradd[71341]: new group: name=ceph-admin, GID=42478
Jan 23 18:32:50 compute-0 useradd[71341]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 23 18:32:50 compute-0 sudo[71337]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:51 compute-0 sudo[71423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgppcfjcjzbaffpaqzllwkqqsnzonpjh ; /usr/bin/python3'
Jan 23 18:32:51 compute-0 sudo[71423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:51 compute-0 sudo[71423]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:51 compute-0 sudo[71496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruupdhsyaoviyuzbsnvjrzafvsangqvp ; /usr/bin/python3'
Jan 23 18:32:51 compute-0 sudo[71496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:51 compute-0 sudo[71496]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:52 compute-0 sudo[71546]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujqukdiknmekmsdcjnbjomuhgfqymrvp ; /usr/bin/python3'
Jan 23 18:32:52 compute-0 sudo[71546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:52 compute-0 sudo[71546]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:52 compute-0 sudo[71572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdmvlpkcnvtaexnyjraynibvdwlajlbb ; /usr/bin/python3'
Jan 23 18:32:52 compute-0 sudo[71572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:52 compute-0 sudo[71572]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:52 compute-0 sudo[71598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdwekhkcaindafhxvccnypixkmsbwgul ; /usr/bin/python3'
Jan 23 18:32:52 compute-0 sudo[71598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:52 compute-0 sudo[71598]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:53 compute-0 sudo[71624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whybtnogbdjfoeuwzxsbmqdoeomctsqr ; /usr/bin/python3'
Jan 23 18:32:53 compute-0 sudo[71624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:53 compute-0 sudo[71624]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:53 compute-0 sudo[71702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltgtouqrqhqqkonimjzyqmxfahgnnmfe ; /usr/bin/python3'
Jan 23 18:32:53 compute-0 sudo[71702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:53 compute-0 sudo[71702]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:53 compute-0 sudo[71775]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajnyrjhmvnwaizedxicnjhfwgvyildrq ; /usr/bin/python3'
Jan 23 18:32:53 compute-0 sudo[71775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:54 compute-0 sudo[71775]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:54 compute-0 sudo[71877]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfbgsluujpqtrjaavpsidwevqacgqgew ; /usr/bin/python3'
Jan 23 18:32:54 compute-0 sudo[71877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:54 compute-0 sudo[71877]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:54 compute-0 sudo[71950]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnjeugzidyxxxyuvtcbtkgylfndosgsf ; /usr/bin/python3'
Jan 23 18:32:54 compute-0 sudo[71950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:54 compute-0 sudo[71950]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:55 compute-0 sudo[72000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtlrawqgpjxyzxrvishgfxrxaqyjpywq ; /usr/bin/python3'
Jan 23 18:32:55 compute-0 sudo[72000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:55 compute-0 python3[72002]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:32:56 compute-0 sudo[72000]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:56 compute-0 sudo[72096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxumxzkfjjgylehjqxpgnxrzryzgukke ; /usr/bin/python3'
Jan 23 18:32:56 compute-0 sudo[72096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:57 compute-0 python3[72098]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 18:32:58 compute-0 sudo[72096]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:58 compute-0 sudo[72123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqtwyuoafspebrqglgvphngynqewvtoh ; /usr/bin/python3'
Jan 23 18:32:58 compute-0 sudo[72123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:58 compute-0 python3[72125]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 18:32:58 compute-0 sudo[72123]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:58 compute-0 sudo[72149]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvfffmfpnqkzhvkdcqzjbzotsosflfxd ; /usr/bin/python3'
Jan 23 18:32:58 compute-0 sudo[72149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:59 compute-0 python3[72151]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:32:59 compute-0 kernel: loop: module loaded
Jan 23 18:32:59 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 23 18:32:59 compute-0 sudo[72149]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:59 compute-0 sudo[72185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhakyevfccrfuqwlgiyszsvlptwaplao ; /usr/bin/python3'
Jan 23 18:32:59 compute-0 sudo[72185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:32:59 compute-0 python3[72187]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:32:59 compute-0 lvm[72190]: PV /dev/loop3 not used.
Jan 23 18:32:59 compute-0 lvm[72199]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:32:59 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 23 18:32:59 compute-0 lvm[72201]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 23 18:32:59 compute-0 sudo[72185]: pam_unix(sudo:session): session closed for user root
Jan 23 18:32:59 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 23 18:32:59 compute-0 sudo[72278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijnrablanajcqczgcogmayglpzmcxccc ; /usr/bin/python3'
Jan 23 18:32:59 compute-0 sudo[72278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:00 compute-0 python3[72280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:33:00 compute-0 sudo[72278]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:00 compute-0 sudo[72351]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pehdeskkeewrmjuvjlplkxqylimkpalc ; /usr/bin/python3'
Jan 23 18:33:00 compute-0 sudo[72351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:00 compute-0 python3[72353]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193179.7686663-36169-25661427270705/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:33:00 compute-0 sudo[72351]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:00 compute-0 sudo[72401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkwaawhxswvqoqrwyrtpylxlgqwjidph ; /usr/bin/python3'
Jan 23 18:33:00 compute-0 sudo[72401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:01 compute-0 python3[72403]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:33:01 compute-0 systemd[1]: Reloading.
Jan 23 18:33:01 compute-0 systemd-rc-local-generator[72431]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:33:01 compute-0 systemd-sysv-generator[72435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:33:01 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 23 18:33:01 compute-0 bash[72442]: /dev/loop3: [64513]:4328448 (/var/lib/ceph-osd-0.img)
Jan 23 18:33:01 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 23 18:33:01 compute-0 sudo[72401]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:01 compute-0 lvm[72443]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:33:01 compute-0 lvm[72443]: VG ceph_vg0 finished
Jan 23 18:33:01 compute-0 sudo[72467]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfbubhmzehescxwvuctagufiqrydbhym ; /usr/bin/python3'
Jan 23 18:33:01 compute-0 sudo[72467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:01 compute-0 python3[72469]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 18:33:03 compute-0 sudo[72467]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:03 compute-0 sudo[72494]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkkqbvbbcgbrmcmpqmwmfdziotndiesj ; /usr/bin/python3'
Jan 23 18:33:03 compute-0 sudo[72494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:03 compute-0 python3[72496]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 18:33:03 compute-0 sudo[72494]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:03 compute-0 sudo[72520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjjjvlfvvnzhlupgaxabwzmjkysyhpvm ; /usr/bin/python3'
Jan 23 18:33:03 compute-0 sudo[72520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:03 compute-0 python3[72522]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:33:04 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Jan 23 18:33:04 compute-0 sudo[72520]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:04 compute-0 sudo[72552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyvkniocfrjxqkuazyxvapmzloqvasiu ; /usr/bin/python3'
Jan 23 18:33:04 compute-0 sudo[72552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:04 compute-0 python3[72554]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:33:05 compute-0 lvm[72557]: PV /dev/loop4 not used.
Jan 23 18:33:05 compute-0 lvm[72559]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:33:05 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 23 18:33:05 compute-0 lvm[72567]:   1 logical volume(s) in volume group "ceph_vg1" now active
Jan 23 18:33:05 compute-0 lvm[72570]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:33:05 compute-0 lvm[72570]: VG ceph_vg1 finished
Jan 23 18:33:05 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 23 18:33:05 compute-0 sudo[72552]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:05 compute-0 sudo[72646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhhmwgjotsoitedfpgdbbhgfmorqxxbt ; /usr/bin/python3'
Jan 23 18:33:05 compute-0 sudo[72646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:05 compute-0 python3[72648]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:33:05 compute-0 sudo[72646]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:06 compute-0 sudo[72719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwpezmtwgkgfxlkzcejsdijtmgjandte ; /usr/bin/python3'
Jan 23 18:33:06 compute-0 sudo[72719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:06 compute-0 python3[72721]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193185.7052839-36196-76188857292182/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:33:06 compute-0 sudo[72719]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:06 compute-0 sudo[72769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxibdhjermalhjowrmupdnrtxijnhkzx ; /usr/bin/python3'
Jan 23 18:33:06 compute-0 sudo[72769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:06 compute-0 python3[72771]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:33:06 compute-0 systemd[1]: Reloading.
Jan 23 18:33:06 compute-0 systemd-rc-local-generator[72800]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:33:06 compute-0 systemd-sysv-generator[72803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:33:07 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 23 18:33:07 compute-0 bash[72810]: /dev/loop4: [64513]:4328903 (/var/lib/ceph-osd-1.img)
Jan 23 18:33:07 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 23 18:33:07 compute-0 sudo[72769]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:07 compute-0 lvm[72812]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:33:07 compute-0 lvm[72812]: VG ceph_vg1 finished
Jan 23 18:33:07 compute-0 sudo[72836]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpxflljfmfthdbmbirabqximjfroatye ; /usr/bin/python3'
Jan 23 18:33:07 compute-0 sudo[72836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:07 compute-0 python3[72838]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 18:33:09 compute-0 sudo[72836]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:09 compute-0 sudo[72863]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtkqnbdhsafzogqyyycmmsuzaiwaatix ; /usr/bin/python3'
Jan 23 18:33:09 compute-0 sudo[72863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:09 compute-0 python3[72865]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 18:33:09 compute-0 sudo[72863]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:09 compute-0 sudo[72889]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mirqafivwgjxmxsuqapjklzhnujoclaj ; /usr/bin/python3'
Jan 23 18:33:09 compute-0 sudo[72889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:09 compute-0 python3[72891]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:33:09 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Jan 23 18:33:09 compute-0 sudo[72889]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:10 compute-0 sudo[72920]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwgrxfplwqjozwfeufhfmpqeamecvkxw ; /usr/bin/python3'
Jan 23 18:33:10 compute-0 sudo[72920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:10 compute-0 python3[72922]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:33:10 compute-0 lvm[72925]: PV /dev/loop5 not used.
Jan 23 18:33:10 compute-0 lvm[72927]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:33:10 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 23 18:33:10 compute-0 lvm[72930]:   1 logical volume(s) in volume group "ceph_vg2" now active
Jan 23 18:33:10 compute-0 lvm[72938]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:33:10 compute-0 lvm[72938]: VG ceph_vg2 finished
Jan 23 18:33:10 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 23 18:33:10 compute-0 sudo[72920]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:10 compute-0 sudo[73014]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjrmvxxtkgzhhsjvzxpryfmnxeaeabjt ; /usr/bin/python3'
Jan 23 18:33:10 compute-0 sudo[73014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:10 compute-0 python3[73016]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:33:11 compute-0 sudo[73014]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:11 compute-0 sudo[73087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtmxyjdvnfrgfhzstbgwiwpzyzppphlr ; /usr/bin/python3'
Jan 23 18:33:11 compute-0 sudo[73087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:11 compute-0 python3[73089]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193190.6961195-36223-39934243779020/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:33:11 compute-0 sudo[73087]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:11 compute-0 sudo[73137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gogzdqupezjnqzonywtwadouwlukmxfi ; /usr/bin/python3'
Jan 23 18:33:11 compute-0 sudo[73137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:11 compute-0 python3[73139]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:33:11 compute-0 systemd[1]: Reloading.
Jan 23 18:33:11 compute-0 systemd-rc-local-generator[73167]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:33:11 compute-0 systemd-sysv-generator[73171]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:33:12 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 23 18:33:12 compute-0 bash[73179]: /dev/loop5: [64513]:4328907 (/var/lib/ceph-osd-2.img)
Jan 23 18:33:12 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 23 18:33:12 compute-0 lvm[73181]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:33:12 compute-0 lvm[73181]: VG ceph_vg2 finished
Jan 23 18:33:12 compute-0 sudo[73137]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:13 compute-0 chronyd[58483]: Selected source 54.39.23.64 (pool.ntp.org)
Jan 23 18:33:14 compute-0 python3[73205]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:33:16 compute-0 sudo[73296]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwfoptfcvzomonuujdhgvdpfixxmdszp ; /usr/bin/python3'
Jan 23 18:33:16 compute-0 sudo[73296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:16 compute-0 python3[73298]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 18:33:19 compute-0 sudo[73296]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:19 compute-0 sudo[73353]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykokvphxcaqqrwkfkdmuuspvhtzmqman ; /usr/bin/python3'
Jan 23 18:33:19 compute-0 sudo[73353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:19 compute-0 python3[73355]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 18:33:22 compute-0 groupadd[73365]: group added to /etc/group: name=cephadm, GID=993
Jan 23 18:33:22 compute-0 groupadd[73365]: group added to /etc/gshadow: name=cephadm
Jan 23 18:33:22 compute-0 groupadd[73365]: new group: name=cephadm, GID=993
Jan 23 18:33:22 compute-0 useradd[73372]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 23 18:33:22 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 18:33:22 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 18:33:22 compute-0 sudo[73353]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:23 compute-0 sudo[73471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xslgaiwqxzbgunlreuxdtdruppbavimm ; /usr/bin/python3'
Jan 23 18:33:23 compute-0 sudo[73471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 18:33:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 18:33:23 compute-0 systemd[1]: run-r00e4183a4511407b90fccc22f30734b8.service: Deactivated successfully.
Jan 23 18:33:23 compute-0 python3[73473]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 18:33:23 compute-0 sudo[73471]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:23 compute-0 sudo[73500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzjhvcjpevtvgfugyegcoemjklqavihg ; /usr/bin/python3'
Jan 23 18:33:23 compute-0 sudo[73500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:23 compute-0 python3[73502]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:33:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:33:23 compute-0 sudo[73500]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:24 compute-0 sudo[73539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkfsrhhkuyzwmmotcjftqfyrpxyakufv ; /usr/bin/python3'
Jan 23 18:33:24 compute-0 sudo[73539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:24 compute-0 python3[73541]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:33:24 compute-0 sudo[73539]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:24 compute-0 sudo[73565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaioowcjwysxnxrbbvmahylaiszusroe ; /usr/bin/python3'
Jan 23 18:33:24 compute-0 sudo[73565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:24 compute-0 python3[73567]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:33:24 compute-0 sudo[73565]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:25 compute-0 sudo[73643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miycjuuigbeofyrfzpcnrfwsqfzbipms ; /usr/bin/python3'
Jan 23 18:33:25 compute-0 sudo[73643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:25 compute-0 python3[73645]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:33:25 compute-0 sudo[73643]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:25 compute-0 sudo[73716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztfwwcfuumfbpdvjbefbuxzpkuqkumpz ; /usr/bin/python3'
Jan 23 18:33:25 compute-0 sudo[73716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:25 compute-0 python3[73718]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193205.117398-36371-181486909574778/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:33:25 compute-0 sudo[73716]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:26 compute-0 sudo[73818]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmyufajotthlwpgqyxdmerufclepmjhp ; /usr/bin/python3'
Jan 23 18:33:26 compute-0 sudo[73818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:26 compute-0 python3[73820]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:33:26 compute-0 sudo[73818]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:26 compute-0 sudo[73891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yddchdvyncheftooxiidqyorkmikazzg ; /usr/bin/python3'
Jan 23 18:33:26 compute-0 sudo[73891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:26 compute-0 python3[73893]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193206.1905115-36389-59014058653770/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:33:26 compute-0 sudo[73891]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:27 compute-0 sudo[73941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcvtyfypedsvbmwxhpukkcmfqnhgkdvi ; /usr/bin/python3'
Jan 23 18:33:27 compute-0 sudo[73941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:27 compute-0 python3[73943]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 18:33:27 compute-0 sudo[73941]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:27 compute-0 sudo[73969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyqaclxdyqwekpwbqfafwmymkimsiggv ; /usr/bin/python3'
Jan 23 18:33:27 compute-0 sudo[73969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:27 compute-0 python3[73971]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 18:33:27 compute-0 sudo[73969]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:27 compute-0 sudo[73997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azbjsqmiitrjpcfsxvqjzpytpputabth ; /usr/bin/python3'
Jan 23 18:33:27 compute-0 sudo[73997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:27 compute-0 python3[73999]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 18:33:27 compute-0 sudo[73997]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:28 compute-0 python3[74025]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 18:33:28 compute-0 sudo[74049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejqeywkwxlyqntgqnmnjsxfbjzwgmpsp ; /usr/bin/python3'
Jan 23 18:33:28 compute-0 sudo[74049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:33:28 compute-0 python3[74051]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:33:28 compute-0 sshd-session[74055]: Accepted publickey for ceph-admin from 192.168.122.100 port 38770 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:33:28 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 23 18:33:28 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 23 18:33:28 compute-0 systemd-logind[792]: New session 18 of user ceph-admin.
Jan 23 18:33:28 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 23 18:33:28 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 23 18:33:28 compute-0 systemd[74059]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:33:28 compute-0 systemd[74059]: Queued start job for default target Main User Target.
Jan 23 18:33:28 compute-0 systemd[74059]: Created slice User Application Slice.
Jan 23 18:33:28 compute-0 systemd[74059]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 18:33:28 compute-0 systemd[74059]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 18:33:28 compute-0 systemd[74059]: Reached target Paths.
Jan 23 18:33:28 compute-0 systemd[74059]: Reached target Timers.
Jan 23 18:33:28 compute-0 systemd[74059]: Starting D-Bus User Message Bus Socket...
Jan 23 18:33:28 compute-0 systemd[74059]: Starting Create User's Volatile Files and Directories...
Jan 23 18:33:28 compute-0 systemd[74059]: Finished Create User's Volatile Files and Directories.
Jan 23 18:33:28 compute-0 systemd[74059]: Listening on D-Bus User Message Bus Socket.
Jan 23 18:33:28 compute-0 systemd[74059]: Reached target Sockets.
Jan 23 18:33:28 compute-0 systemd[74059]: Reached target Basic System.
Jan 23 18:33:28 compute-0 systemd[74059]: Reached target Main User Target.
Jan 23 18:33:28 compute-0 systemd[74059]: Startup finished in 130ms.
Jan 23 18:33:28 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 23 18:33:28 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Jan 23 18:33:28 compute-0 sshd-session[74055]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:33:29 compute-0 sudo[74075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 23 18:33:29 compute-0 sudo[74075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:33:29 compute-0 sudo[74075]: pam_unix(sudo:session): session closed for user root
Jan 23 18:33:29 compute-0 sshd-session[74074]: Received disconnect from 192.168.122.100 port 38770:11: disconnected by user
Jan 23 18:33:29 compute-0 sshd-session[74074]: Disconnected from user ceph-admin 192.168.122.100 port 38770
Jan 23 18:33:29 compute-0 sshd-session[74055]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 18:33:29 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 18:33:29 compute-0 systemd-logind[792]: Session 18 logged out. Waiting for processes to exit.
Jan 23 18:33:29 compute-0 systemd-logind[792]: Removed session 18.
Jan 23 18:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3841798943-lower\x2dmapped.mount: Deactivated successfully.
Jan 23 18:33:39 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 23 18:33:39 compute-0 systemd[74059]: Activating special unit Exit the Session...
Jan 23 18:33:39 compute-0 systemd[74059]: Stopped target Main User Target.
Jan 23 18:33:39 compute-0 systemd[74059]: Stopped target Basic System.
Jan 23 18:33:39 compute-0 systemd[74059]: Stopped target Paths.
Jan 23 18:33:39 compute-0 systemd[74059]: Stopped target Sockets.
Jan 23 18:33:39 compute-0 systemd[74059]: Stopped target Timers.
Jan 23 18:33:39 compute-0 systemd[74059]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 23 18:33:39 compute-0 systemd[74059]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 18:33:39 compute-0 systemd[74059]: Closed D-Bus User Message Bus Socket.
Jan 23 18:33:39 compute-0 systemd[74059]: Stopped Create User's Volatile Files and Directories.
Jan 23 18:33:39 compute-0 systemd[74059]: Removed slice User Application Slice.
Jan 23 18:33:39 compute-0 systemd[74059]: Reached target Shutdown.
Jan 23 18:33:39 compute-0 systemd[74059]: Finished Exit the Session.
Jan 23 18:33:39 compute-0 systemd[74059]: Reached target Exit the Session.
Jan 23 18:33:39 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 23 18:33:39 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 23 18:33:39 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 23 18:33:39 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 23 18:33:39 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 23 18:33:39 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 23 18:33:39 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 23 18:33:56 compute-0 podman[74152]: 2026-01-23 18:33:56.253003179 +0000 UTC m=+26.918228496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:33:56 compute-0 podman[74213]: 2026-01-23 18:33:56.336034435 +0000 UTC m=+0.050313121 container create 5a7266fa900e81d14a621b83d345cbf7e723c44371e559c66629010d5b69c377 (image=quay.io/ceph/ceph:v20, name=fervent_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 18:33:56 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 23 18:33:56 compute-0 systemd[1]: Started libpod-conmon-5a7266fa900e81d14a621b83d345cbf7e723c44371e559c66629010d5b69c377.scope.
Jan 23 18:33:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:33:56 compute-0 podman[74213]: 2026-01-23 18:33:56.312663748 +0000 UTC m=+0.026942454 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:33:56 compute-0 podman[74213]: 2026-01-23 18:33:56.427012877 +0000 UTC m=+0.141291583 container init 5a7266fa900e81d14a621b83d345cbf7e723c44371e559c66629010d5b69c377 (image=quay.io/ceph/ceph:v20, name=fervent_nobel, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:33:56 compute-0 podman[74213]: 2026-01-23 18:33:56.433903797 +0000 UTC m=+0.148182483 container start 5a7266fa900e81d14a621b83d345cbf7e723c44371e559c66629010d5b69c377 (image=quay.io/ceph/ceph:v20, name=fervent_nobel, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 18:33:56 compute-0 podman[74213]: 2026-01-23 18:33:56.437493416 +0000 UTC m=+0.151772102 container attach 5a7266fa900e81d14a621b83d345cbf7e723c44371e559c66629010d5b69c377 (image=quay.io/ceph/ceph:v20, name=fervent_nobel, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 23 18:33:56 compute-0 fervent_nobel[74227]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 23 18:33:56 compute-0 systemd[1]: libpod-5a7266fa900e81d14a621b83d345cbf7e723c44371e559c66629010d5b69c377.scope: Deactivated successfully.
Jan 23 18:33:56 compute-0 podman[74213]: 2026-01-23 18:33:56.542933104 +0000 UTC m=+0.257211790 container died 5a7266fa900e81d14a621b83d345cbf7e723c44371e559c66629010d5b69c377 (image=quay.io/ceph/ceph:v20, name=fervent_nobel, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 18:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a122bd28e49e142a04954a1bc75b7aacfef360097c8cada80c308c1de053a3c6-merged.mount: Deactivated successfully.
Jan 23 18:33:56 compute-0 podman[74213]: 2026-01-23 18:33:56.587277077 +0000 UTC m=+0.301555763 container remove 5a7266fa900e81d14a621b83d345cbf7e723c44371e559c66629010d5b69c377 (image=quay.io/ceph/ceph:v20, name=fervent_nobel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 18:33:56 compute-0 systemd[1]: libpod-conmon-5a7266fa900e81d14a621b83d345cbf7e723c44371e559c66629010d5b69c377.scope: Deactivated successfully.
Jan 23 18:33:56 compute-0 podman[74247]: 2026-01-23 18:33:56.646810044 +0000 UTC m=+0.040112600 container create f58139c6f0e3790340783aa879c380811c3c2a6f02d85017869c35518956af94 (image=quay.io/ceph/ceph:v20, name=wonderful_raman, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 18:33:56 compute-0 systemd[1]: Started libpod-conmon-f58139c6f0e3790340783aa879c380811c3c2a6f02d85017869c35518956af94.scope.
Jan 23 18:33:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:33:56 compute-0 podman[74247]: 2026-01-23 18:33:56.720075259 +0000 UTC m=+0.113377835 container init f58139c6f0e3790340783aa879c380811c3c2a6f02d85017869c35518956af94 (image=quay.io/ceph/ceph:v20, name=wonderful_raman, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:33:56 compute-0 podman[74247]: 2026-01-23 18:33:56.628255337 +0000 UTC m=+0.021557933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:33:56 compute-0 podman[74247]: 2026-01-23 18:33:56.725522003 +0000 UTC m=+0.118824569 container start f58139c6f0e3790340783aa879c380811c3c2a6f02d85017869c35518956af94 (image=quay.io/ceph/ceph:v20, name=wonderful_raman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:33:56 compute-0 wonderful_raman[74264]: 167 167
Jan 23 18:33:56 compute-0 systemd[1]: libpod-f58139c6f0e3790340783aa879c380811c3c2a6f02d85017869c35518956af94.scope: Deactivated successfully.
Jan 23 18:33:56 compute-0 podman[74247]: 2026-01-23 18:33:56.730039196 +0000 UTC m=+0.123341762 container attach f58139c6f0e3790340783aa879c380811c3c2a6f02d85017869c35518956af94 (image=quay.io/ceph/ceph:v20, name=wonderful_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:33:56 compute-0 podman[74247]: 2026-01-23 18:33:56.730410604 +0000 UTC m=+0.123713160 container died f58139c6f0e3790340783aa879c380811c3c2a6f02d85017869c35518956af94 (image=quay.io/ceph/ceph:v20, name=wonderful_raman, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 23 18:33:56 compute-0 podman[74247]: 2026-01-23 18:33:56.773775703 +0000 UTC m=+0.167078299 container remove f58139c6f0e3790340783aa879c380811c3c2a6f02d85017869c35518956af94 (image=quay.io/ceph/ceph:v20, name=wonderful_raman, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:33:56 compute-0 systemd[1]: libpod-conmon-f58139c6f0e3790340783aa879c380811c3c2a6f02d85017869c35518956af94.scope: Deactivated successfully.
Jan 23 18:33:56 compute-0 podman[74281]: 2026-01-23 18:33:56.831362502 +0000 UTC m=+0.040035988 container create d7ad03636c088296182a8a5a43ee6c2ad56e660dd60006021cae125b4e708eec (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 18:33:56 compute-0 systemd[1]: Started libpod-conmon-d7ad03636c088296182a8a5a43ee6c2ad56e660dd60006021cae125b4e708eec.scope.
Jan 23 18:33:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:33:56 compute-0 podman[74281]: 2026-01-23 18:33:56.897928722 +0000 UTC m=+0.106602208 container init d7ad03636c088296182a8a5a43ee6c2ad56e660dd60006021cae125b4e708eec (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:33:56 compute-0 podman[74281]: 2026-01-23 18:33:56.903022588 +0000 UTC m=+0.111696064 container start d7ad03636c088296182a8a5a43ee6c2ad56e660dd60006021cae125b4e708eec (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 18:33:56 compute-0 podman[74281]: 2026-01-23 18:33:56.907685273 +0000 UTC m=+0.116358739 container attach d7ad03636c088296182a8a5a43ee6c2ad56e660dd60006021cae125b4e708eec (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:33:56 compute-0 podman[74281]: 2026-01-23 18:33:56.81465281 +0000 UTC m=+0.023326266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:33:56 compute-0 sweet_hypatia[74297]: AQAUv3NpRf4TNxAA55sW3lObByhy4J8A27LVZw==
Jan 23 18:33:56 compute-0 systemd[1]: libpod-d7ad03636c088296182a8a5a43ee6c2ad56e660dd60006021cae125b4e708eec.scope: Deactivated successfully.
Jan 23 18:33:56 compute-0 podman[74281]: 2026-01-23 18:33:56.927917842 +0000 UTC m=+0.136591298 container died d7ad03636c088296182a8a5a43ee6c2ad56e660dd60006021cae125b4e708eec (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 18:33:56 compute-0 podman[74281]: 2026-01-23 18:33:56.966987595 +0000 UTC m=+0.175661051 container remove d7ad03636c088296182a8a5a43ee6c2ad56e660dd60006021cae125b4e708eec (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:33:56 compute-0 systemd[1]: libpod-conmon-d7ad03636c088296182a8a5a43ee6c2ad56e660dd60006021cae125b4e708eec.scope: Deactivated successfully.
Jan 23 18:33:57 compute-0 podman[74316]: 2026-01-23 18:33:57.029812333 +0000 UTC m=+0.043483382 container create 5837d542339ff64f4cf01374cbb4d11d0c5f224b64fd915fc3255384515113dd (image=quay.io/ceph/ceph:v20, name=quirky_matsumoto, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:33:57 compute-0 systemd[1]: Started libpod-conmon-5837d542339ff64f4cf01374cbb4d11d0c5f224b64fd915fc3255384515113dd.scope.
Jan 23 18:33:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:33:57 compute-0 podman[74316]: 2026-01-23 18:33:57.095654236 +0000 UTC m=+0.109325295 container init 5837d542339ff64f4cf01374cbb4d11d0c5f224b64fd915fc3255384515113dd (image=quay.io/ceph/ceph:v20, name=quirky_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:33:57 compute-0 podman[74316]: 2026-01-23 18:33:57.101017619 +0000 UTC m=+0.114688658 container start 5837d542339ff64f4cf01374cbb4d11d0c5f224b64fd915fc3255384515113dd (image=quay.io/ceph/ceph:v20, name=quirky_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:33:57 compute-0 podman[74316]: 2026-01-23 18:33:57.010916257 +0000 UTC m=+0.024587326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:33:57 compute-0 podman[74316]: 2026-01-23 18:33:57.108089983 +0000 UTC m=+0.121761072 container attach 5837d542339ff64f4cf01374cbb4d11d0c5f224b64fd915fc3255384515113dd (image=quay.io/ceph/ceph:v20, name=quirky_matsumoto, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 18:33:57 compute-0 quirky_matsumoto[74333]: AQAVv3NpjpZJBxAA7ri1xdDA9y2lwQ3MoXrLjg==
Jan 23 18:33:57 compute-0 systemd[1]: libpod-5837d542339ff64f4cf01374cbb4d11d0c5f224b64fd915fc3255384515113dd.scope: Deactivated successfully.
Jan 23 18:33:57 compute-0 podman[74316]: 2026-01-23 18:33:57.126190008 +0000 UTC m=+0.139861057 container died 5837d542339ff64f4cf01374cbb4d11d0c5f224b64fd915fc3255384515113dd (image=quay.io/ceph/ceph:v20, name=quirky_matsumoto, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:33:57 compute-0 podman[74316]: 2026-01-23 18:33:57.173293449 +0000 UTC m=+0.186964498 container remove 5837d542339ff64f4cf01374cbb4d11d0c5f224b64fd915fc3255384515113dd (image=quay.io/ceph/ceph:v20, name=quirky_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:33:57 compute-0 systemd[1]: libpod-conmon-5837d542339ff64f4cf01374cbb4d11d0c5f224b64fd915fc3255384515113dd.scope: Deactivated successfully.
Jan 23 18:33:57 compute-0 podman[74352]: 2026-01-23 18:33:57.228785647 +0000 UTC m=+0.039429972 container create 95a87f7f1259b276da0ad770461875e1ca27c624f78f8e3ebdfa35cf57009f1a (image=quay.io/ceph/ceph:v20, name=romantic_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:33:57 compute-0 systemd[1]: Started libpod-conmon-95a87f7f1259b276da0ad770461875e1ca27c624f78f8e3ebdfa35cf57009f1a.scope.
Jan 23 18:33:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:33:57 compute-0 podman[74352]: 2026-01-23 18:33:57.208585879 +0000 UTC m=+0.019230234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:01 compute-0 podman[74352]: 2026-01-23 18:34:01.469763997 +0000 UTC m=+4.280408312 container init 95a87f7f1259b276da0ad770461875e1ca27c624f78f8e3ebdfa35cf57009f1a (image=quay.io/ceph/ceph:v20, name=romantic_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 18:34:01 compute-0 podman[74352]: 2026-01-23 18:34:01.474679548 +0000 UTC m=+4.285323873 container start 95a87f7f1259b276da0ad770461875e1ca27c624f78f8e3ebdfa35cf57009f1a (image=quay.io/ceph/ceph:v20, name=romantic_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 23 18:34:01 compute-0 romantic_elbakyan[74368]: AQAZv3Np57GaHRAACHy1rpH2v9njkLtXxsyjRA==
Jan 23 18:34:01 compute-0 systemd[1]: libpod-95a87f7f1259b276da0ad770461875e1ca27c624f78f8e3ebdfa35cf57009f1a.scope: Deactivated successfully.
Jan 23 18:34:01 compute-0 podman[74352]: 2026-01-23 18:34:01.528951806 +0000 UTC m=+4.339596151 container attach 95a87f7f1259b276da0ad770461875e1ca27c624f78f8e3ebdfa35cf57009f1a (image=quay.io/ceph/ceph:v20, name=romantic_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 18:34:01 compute-0 podman[74352]: 2026-01-23 18:34:01.52952608 +0000 UTC m=+4.340170405 container died 95a87f7f1259b276da0ad770461875e1ca27c624f78f8e3ebdfa35cf57009f1a (image=quay.io/ceph/ceph:v20, name=romantic_elbakyan, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 18:34:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e55bbbfe7b9764e9d1be23c4605b24a1d6e8c3ea26e6e059f748f20c815274cf-merged.mount: Deactivated successfully.
Jan 23 18:34:01 compute-0 podman[74352]: 2026-01-23 18:34:01.646304808 +0000 UTC m=+4.456949133 container remove 95a87f7f1259b276da0ad770461875e1ca27c624f78f8e3ebdfa35cf57009f1a (image=quay.io/ceph/ceph:v20, name=romantic_elbakyan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:34:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:34:01 compute-0 systemd[1]: libpod-conmon-95a87f7f1259b276da0ad770461875e1ca27c624f78f8e3ebdfa35cf57009f1a.scope: Deactivated successfully.
Jan 23 18:34:01 compute-0 podman[74389]: 2026-01-23 18:34:01.71048585 +0000 UTC m=+0.042619402 container create 8b245b519da408fc5e8ebf0882cc92c5cbbb58e18df05ea2ce0712a92894d49f (image=quay.io/ceph/ceph:v20, name=funny_babbage, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 23 18:34:01 compute-0 systemd[1]: Started libpod-conmon-8b245b519da408fc5e8ebf0882cc92c5cbbb58e18df05ea2ce0712a92894d49f.scope.
Jan 23 18:34:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bcf1f34cf1c0c44c44bc832eeca2a17a569de2ab518b4a187c5e2de773ed013/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:01 compute-0 podman[74389]: 2026-01-23 18:34:01.771043763 +0000 UTC m=+0.103177335 container init 8b245b519da408fc5e8ebf0882cc92c5cbbb58e18df05ea2ce0712a92894d49f (image=quay.io/ceph/ceph:v20, name=funny_babbage, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:01 compute-0 podman[74389]: 2026-01-23 18:34:01.776320603 +0000 UTC m=+0.108454155 container start 8b245b519da408fc5e8ebf0882cc92c5cbbb58e18df05ea2ce0712a92894d49f (image=quay.io/ceph/ceph:v20, name=funny_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:01 compute-0 podman[74389]: 2026-01-23 18:34:01.780290591 +0000 UTC m=+0.112424243 container attach 8b245b519da408fc5e8ebf0882cc92c5cbbb58e18df05ea2ce0712a92894d49f (image=quay.io/ceph/ceph:v20, name=funny_babbage, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 18:34:01 compute-0 podman[74389]: 2026-01-23 18:34:01.689520924 +0000 UTC m=+0.021654496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:01 compute-0 funny_babbage[74405]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 23 18:34:01 compute-0 funny_babbage[74405]: setting min_mon_release = tentacle
Jan 23 18:34:01 compute-0 funny_babbage[74405]: /usr/bin/monmaptool: set fsid to 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:01 compute-0 funny_babbage[74405]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 23 18:34:01 compute-0 systemd[1]: libpod-8b245b519da408fc5e8ebf0882cc92c5cbbb58e18df05ea2ce0712a92894d49f.scope: Deactivated successfully.
Jan 23 18:34:01 compute-0 podman[74389]: 2026-01-23 18:34:01.806777213 +0000 UTC m=+0.138910785 container died 8b245b519da408fc5e8ebf0882cc92c5cbbb58e18df05ea2ce0712a92894d49f (image=quay.io/ceph/ceph:v20, name=funny_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:01 compute-0 podman[74389]: 2026-01-23 18:34:01.851535056 +0000 UTC m=+0.183668608 container remove 8b245b519da408fc5e8ebf0882cc92c5cbbb58e18df05ea2ce0712a92894d49f (image=quay.io/ceph/ceph:v20, name=funny_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:01 compute-0 systemd[1]: libpod-conmon-8b245b519da408fc5e8ebf0882cc92c5cbbb58e18df05ea2ce0712a92894d49f.scope: Deactivated successfully.
Jan 23 18:34:01 compute-0 podman[74424]: 2026-01-23 18:34:01.910477448 +0000 UTC m=+0.038971780 container create 11642d11f8ceebc7ae5aab039a25ee114ae85ebb45df429888d34df5ea3e86e9 (image=quay.io/ceph/ceph:v20, name=pensive_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:01 compute-0 systemd[1]: Started libpod-conmon-11642d11f8ceebc7ae5aab039a25ee114ae85ebb45df429888d34df5ea3e86e9.scope.
Jan 23 18:34:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa47c5b430b949efdc6950daee1e864d27895134f48ebbd911109dd441359775/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa47c5b430b949efdc6950daee1e864d27895134f48ebbd911109dd441359775/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa47c5b430b949efdc6950daee1e864d27895134f48ebbd911109dd441359775/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa47c5b430b949efdc6950daee1e864d27895134f48ebbd911109dd441359775/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:01 compute-0 podman[74424]: 2026-01-23 18:34:01.977024189 +0000 UTC m=+0.105518541 container init 11642d11f8ceebc7ae5aab039a25ee114ae85ebb45df429888d34df5ea3e86e9 (image=quay.io/ceph/ceph:v20, name=pensive_booth, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:01 compute-0 podman[74424]: 2026-01-23 18:34:01.981748185 +0000 UTC m=+0.110242517 container start 11642d11f8ceebc7ae5aab039a25ee114ae85ebb45df429888d34df5ea3e86e9 (image=quay.io/ceph/ceph:v20, name=pensive_booth, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:01 compute-0 podman[74424]: 2026-01-23 18:34:01.985314993 +0000 UTC m=+0.113809335 container attach 11642d11f8ceebc7ae5aab039a25ee114ae85ebb45df429888d34df5ea3e86e9 (image=quay.io/ceph/ceph:v20, name=pensive_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 18:34:01 compute-0 podman[74424]: 2026-01-23 18:34:01.890471066 +0000 UTC m=+0.018965418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:02 compute-0 systemd[1]: libpod-11642d11f8ceebc7ae5aab039a25ee114ae85ebb45df429888d34df5ea3e86e9.scope: Deactivated successfully.
Jan 23 18:34:02 compute-0 podman[74424]: 2026-01-23 18:34:02.093643823 +0000 UTC m=+0.222138165 container died 11642d11f8ceebc7ae5aab039a25ee114ae85ebb45df429888d34df5ea3e86e9 (image=quay.io/ceph/ceph:v20, name=pensive_booth, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:02 compute-0 podman[74424]: 2026-01-23 18:34:02.136027008 +0000 UTC m=+0.264521340 container remove 11642d11f8ceebc7ae5aab039a25ee114ae85ebb45df429888d34df5ea3e86e9 (image=quay.io/ceph/ceph:v20, name=pensive_booth, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:34:02 compute-0 systemd[1]: libpod-conmon-11642d11f8ceebc7ae5aab039a25ee114ae85ebb45df429888d34df5ea3e86e9.scope: Deactivated successfully.
Jan 23 18:34:02 compute-0 systemd[1]: Reloading.
Jan 23 18:34:02 compute-0 systemd-rc-local-generator[74511]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:02 compute-0 systemd-sysv-generator[74514]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bcf1f34cf1c0c44c44bc832eeca2a17a569de2ab518b4a187c5e2de773ed013-merged.mount: Deactivated successfully.
Jan 23 18:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:34:02 compute-0 systemd[1]: Reloading.
Jan 23 18:34:02 compute-0 systemd-rc-local-generator[74548]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:02 compute-0 systemd-sysv-generator[74551]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:02 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 23 18:34:02 compute-0 systemd[1]: Reloading.
Jan 23 18:34:02 compute-0 systemd-rc-local-generator[74585]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:02 compute-0 systemd-sysv-generator[74589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:02 compute-0 systemd[1]: Reached target Ceph cluster 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:34:02 compute-0 systemd[1]: Reloading.
Jan 23 18:34:03 compute-0 systemd-rc-local-generator[74622]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:03 compute-0 systemd-sysv-generator[74627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:03 compute-0 systemd[1]: Reloading.
Jan 23 18:34:03 compute-0 systemd-rc-local-generator[74663]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:03 compute-0 systemd-sysv-generator[74667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:03 compute-0 systemd[1]: Created slice Slice /system/ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:34:03 compute-0 systemd[1]: Reached target System Time Set.
Jan 23 18:34:03 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 23 18:34:03 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:34:03 compute-0 podman[74721]: 2026-01-23 18:34:03.707383934 +0000 UTC m=+0.019915841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:03 compute-0 podman[74721]: 2026-01-23 18:34:03.812613928 +0000 UTC m=+0.125145815 container create dc1fee4bf2fc92be1cdab540c8aa667e7720d29033c633a19ec808101c7d2f5d (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0d2ebf2899b117bc170a1e1dfd5ebb8669b3fd88cd087c2ee9ad58e97bca26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0d2ebf2899b117bc170a1e1dfd5ebb8669b3fd88cd087c2ee9ad58e97bca26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0d2ebf2899b117bc170a1e1dfd5ebb8669b3fd88cd087c2ee9ad58e97bca26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0d2ebf2899b117bc170a1e1dfd5ebb8669b3fd88cd087c2ee9ad58e97bca26/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:04 compute-0 podman[74721]: 2026-01-23 18:34:04.059514362 +0000 UTC m=+0.372046279 container init dc1fee4bf2fc92be1cdab540c8aa667e7720d29033c633a19ec808101c7d2f5d (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 23 18:34:04 compute-0 podman[74721]: 2026-01-23 18:34:04.064972037 +0000 UTC m=+0.377503924 container start dc1fee4bf2fc92be1cdab540c8aa667e7720d29033c633a19ec808101c7d2f5d (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:04 compute-0 bash[74721]: dc1fee4bf2fc92be1cdab540c8aa667e7720d29033c633a19ec808101c7d2f5d
Jan 23 18:34:04 compute-0 systemd[1]: Started Ceph mon.compute-0 for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:34:04 compute-0 ceph-mon[74740]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 23 18:34:04 compute-0 ceph-mon[74740]: pidfile_write: ignore empty --pid-file
Jan 23 18:34:04 compute-0 ceph-mon[74740]: load: jerasure load: lrc 
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: RocksDB version: 7.9.2
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Git sha 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: DB SUMMARY
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: DB Session ID:  KW5MRVHNH49RR7OC39HB
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: CURRENT file:  CURRENT
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                         Options.error_if_exists: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                       Options.create_if_missing: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                                     Options.env: 0x560e185ff440
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                                Options.info_log: 0x560e196893e0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                              Options.statistics: (nil)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                               Options.use_fsync: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                              Options.db_log_dir: 
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                                 Options.wal_dir: 
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                    Options.write_buffer_manager: 0x560e19608140
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                  Options.unordered_write: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                               Options.row_cache: None
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                              Options.wal_filter: None
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.two_write_queues: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.wal_compression: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.atomic_flush: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.max_background_jobs: 2
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.max_background_compactions: -1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.max_subcompactions: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.max_total_wal_size: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                          Options.max_open_files: -1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:       Options.compaction_readahead_size: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Compression algorithms supported:
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         kZSTD supported: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         kXpressCompression supported: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         kBZip2Compression supported: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         kLZ4Compression supported: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         kZlibCompression supported: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         kSnappyCompression supported: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:           Options.merge_operator: 
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:        Options.compaction_filter: None
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560e19614600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560e195f98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:        Options.write_buffer_size: 33554432
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:  Options.max_write_buffer_number: 2
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:          Options.compression: NoCompression
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.num_levels: 7
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1f6198cf-79c4-4079-bac9-29a7a92f17c9
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193244115901, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193244179709, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "KW5MRVHNH49RR7OC39HB", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193244180093, "job": 1, "event": "recovery_finished"}
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 23 18:34:04 compute-0 podman[74762]: 2026-01-23 18:34:04.301082696 +0000 UTC m=+0.102547898 container create 1a4808034f7b739490c09ddf9a27bbd455855714b2d3457cac2fbe78ef2dc708 (image=quay.io/ceph/ceph:v20, name=peaceful_feynman, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560e19626e00
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: DB pointer 0x560e19772000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:34:04 compute-0 ceph-mon[74740]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560e195f98d0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 18:34:04 compute-0 ceph-mon[74740]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@-1(???) e0 preinit fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:04 compute-0 podman[74762]: 2026-01-23 18:34:04.221474094 +0000 UTC m=+0.022939306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 23 18:34:04 compute-0 ceph-mon[74740]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 23 18:34:04 compute-0 ceph-mon[74740]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 23 18:34:04 compute-0 systemd[1]: Started libpod-conmon-1a4808034f7b739490c09ddf9a27bbd455855714b2d3457cac2fbe78ef2dc708.scope.
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 18:34:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467b12257a476318265485ecdf6e53fe81c11c380205315ea94c990a71fc9ec0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467b12257a476318265485ecdf6e53fe81c11c380205315ea94c990a71fc9ec0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467b12257a476318265485ecdf6e53fe81c11c380205315ea94c990a71fc9ec0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [DBG] : fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [DBG] : last_changed 2026-01-23T18:34:01.802781+0000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [DBG] : created 2026-01-23T18:34:01.802781+0000
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-23T18:34:02.020446Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,os=Linux}
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 23 18:34:04 compute-0 podman[74762]: 2026-01-23 18:34:04.468973944 +0000 UTC m=+0.270439166 container init 1a4808034f7b739490c09ddf9a27bbd455855714b2d3457cac2fbe78ef2dc708 (image=quay.io/ceph/ceph:v20, name=peaceful_feynman, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 18:34:04 compute-0 podman[74762]: 2026-01-23 18:34:04.476084239 +0000 UTC m=+0.277549441 container start 1a4808034f7b739490c09ddf9a27bbd455855714b2d3457cac2fbe78ef2dc708 (image=quay.io/ceph/ceph:v20, name=peaceful_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).mds e1 new map
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-23T18:34:04:461111+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 23 18:34:04 compute-0 podman[74762]: 2026-01-23 18:34:04.481297098 +0000 UTC m=+0.282762300 container attach 1a4808034f7b739490c09ddf9a27bbd455855714b2d3457cac2fbe78ef2dc708 (image=quay.io/ceph/ceph:v20, name=peaceful_feynman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [DBG] : fsmap 
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mkfs 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader) e1 handle_auth_request failed to assign global_id
Jan 23 18:34:04 compute-0 ceph-mon[74740]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 23 18:34:04 compute-0 ceph-mon[74740]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3551547133' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:   cluster:
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:     id:     4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:     health: HEALTH_OK
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:  
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:   services:
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:     mon: 1 daemons, quorum compute-0 (age 0.531646s) [leader: compute-0]
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:     mgr: no daemons active
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:     osd: 0 osds: 0 up, 0 in
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:  
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:   data:
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:     pools:   0 pools, 0 pgs
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:     objects: 0 objects, 0 B
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:     usage:   0 B used, 0 B / 0 B avail
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:     pgs:     
Jan 23 18:34:04 compute-0 peaceful_feynman[74796]:  
Jan 23 18:34:04 compute-0 systemd[1]: libpod-1a4808034f7b739490c09ddf9a27bbd455855714b2d3457cac2fbe78ef2dc708.scope: Deactivated successfully.
Jan 23 18:34:04 compute-0 podman[74762]: 2026-01-23 18:34:04.89282337 +0000 UTC m=+0.694288592 container died 1a4808034f7b739490c09ddf9a27bbd455855714b2d3457cac2fbe78ef2dc708 (image=quay.io/ceph/ceph:v20, name=peaceful_feynman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 18:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-467b12257a476318265485ecdf6e53fe81c11c380205315ea94c990a71fc9ec0-merged.mount: Deactivated successfully.
Jan 23 18:34:05 compute-0 podman[74762]: 2026-01-23 18:34:05.169891418 +0000 UTC m=+0.971356620 container remove 1a4808034f7b739490c09ddf9a27bbd455855714b2d3457cac2fbe78ef2dc708 (image=quay.io/ceph/ceph:v20, name=peaceful_feynman, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:34:05 compute-0 podman[74835]: 2026-01-23 18:34:05.224965296 +0000 UTC m=+0.036036839 container create a6f232b2ee5ee95d357f66d9981c55baddf6e84abac4e21926e23738d4b2cc66 (image=quay.io/ceph/ceph:v20, name=magical_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 18:34:05 compute-0 systemd[1]: Started libpod-conmon-a6f232b2ee5ee95d357f66d9981c55baddf6e84abac4e21926e23738d4b2cc66.scope.
Jan 23 18:34:05 compute-0 systemd[1]: libpod-conmon-1a4808034f7b739490c09ddf9a27bbd455855714b2d3457cac2fbe78ef2dc708.scope: Deactivated successfully.
Jan 23 18:34:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a1ce437f1f021e19af5c5fe30e47d8c523f677b072b6bfd47905cff1c4311c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a1ce437f1f021e19af5c5fe30e47d8c523f677b072b6bfd47905cff1c4311c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a1ce437f1f021e19af5c5fe30e47d8c523f677b072b6bfd47905cff1c4311c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a1ce437f1f021e19af5c5fe30e47d8c523f677b072b6bfd47905cff1c4311c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:05 compute-0 podman[74835]: 2026-01-23 18:34:05.29613359 +0000 UTC m=+0.107205153 container init a6f232b2ee5ee95d357f66d9981c55baddf6e84abac4e21926e23738d4b2cc66 (image=quay.io/ceph/ceph:v20, name=magical_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 18:34:05 compute-0 podman[74835]: 2026-01-23 18:34:05.20891656 +0000 UTC m=+0.019988123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:05 compute-0 podman[74835]: 2026-01-23 18:34:05.305283555 +0000 UTC m=+0.116355098 container start a6f232b2ee5ee95d357f66d9981c55baddf6e84abac4e21926e23738d4b2cc66 (image=quay.io/ceph/ceph:v20, name=magical_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:05 compute-0 podman[74835]: 2026-01-23 18:34:05.308742261 +0000 UTC m=+0.119813804 container attach a6f232b2ee5ee95d357f66d9981c55baddf6e84abac4e21926e23738d4b2cc66 (image=quay.io/ceph/ceph:v20, name=magical_chandrasekhar, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:34:05 compute-0 ceph-mon[74740]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 23 18:34:05 compute-0 ceph-mon[74740]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/41457114' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 23 18:34:05 compute-0 ceph-mon[74740]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/41457114' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 18:34:05 compute-0 magical_chandrasekhar[74852]: 
Jan 23 18:34:05 compute-0 magical_chandrasekhar[74852]: [global]
Jan 23 18:34:05 compute-0 magical_chandrasekhar[74852]:         fsid = 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:05 compute-0 magical_chandrasekhar[74852]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 23 18:34:05 compute-0 magical_chandrasekhar[74852]:         osd_crush_chooseleaf_type = 0
Jan 23 18:34:05 compute-0 ceph-mon[74740]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 18:34:05 compute-0 ceph-mon[74740]: monmap epoch 1
Jan 23 18:34:05 compute-0 ceph-mon[74740]: fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:05 compute-0 ceph-mon[74740]: last_changed 2026-01-23T18:34:01.802781+0000
Jan 23 18:34:05 compute-0 ceph-mon[74740]: created 2026-01-23T18:34:01.802781+0000
Jan 23 18:34:05 compute-0 ceph-mon[74740]: min_mon_release 20 (tentacle)
Jan 23 18:34:05 compute-0 ceph-mon[74740]: election_strategy: 1
Jan 23 18:34:05 compute-0 ceph-mon[74740]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 18:34:05 compute-0 ceph-mon[74740]: fsmap 
Jan 23 18:34:05 compute-0 ceph-mon[74740]: osdmap e1: 0 total, 0 up, 0 in
Jan 23 18:34:05 compute-0 ceph-mon[74740]: mgrmap e1: no daemons active
Jan 23 18:34:05 compute-0 ceph-mon[74740]: from='client.? 192.168.122.100:0/3551547133' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 23 18:34:05 compute-0 ceph-mon[74740]: from='client.? 192.168.122.100:0/41457114' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 23 18:34:05 compute-0 ceph-mon[74740]: from='client.? 192.168.122.100:0/41457114' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 18:34:05 compute-0 systemd[1]: libpod-a6f232b2ee5ee95d357f66d9981c55baddf6e84abac4e21926e23738d4b2cc66.scope: Deactivated successfully.
Jan 23 18:34:05 compute-0 podman[74835]: 2026-01-23 18:34:05.5197158 +0000 UTC m=+0.330787403 container died a6f232b2ee5ee95d357f66d9981c55baddf6e84abac4e21926e23738d4b2cc66 (image=quay.io/ceph/ceph:v20, name=magical_chandrasekhar, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0a1ce437f1f021e19af5c5fe30e47d8c523f677b072b6bfd47905cff1c4311c-merged.mount: Deactivated successfully.
Jan 23 18:34:05 compute-0 podman[74835]: 2026-01-23 18:34:05.573297661 +0000 UTC m=+0.384369204 container remove a6f232b2ee5ee95d357f66d9981c55baddf6e84abac4e21926e23738d4b2cc66 (image=quay.io/ceph/ceph:v20, name=magical_chandrasekhar, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 18:34:05 compute-0 systemd[1]: libpod-conmon-a6f232b2ee5ee95d357f66d9981c55baddf6e84abac4e21926e23738d4b2cc66.scope: Deactivated successfully.
Jan 23 18:34:05 compute-0 podman[74892]: 2026-01-23 18:34:05.62967078 +0000 UTC m=+0.039130316 container create 487bd33745b7200683b2ce33f12064ae9d4b57657b0a3b05c5e649c2a56f42df (image=quay.io/ceph/ceph:v20, name=goofy_shirley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 18:34:05 compute-0 systemd[1]: Started libpod-conmon-487bd33745b7200683b2ce33f12064ae9d4b57657b0a3b05c5e649c2a56f42df.scope.
Jan 23 18:34:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79885bba07eb5bb3ee3ba703286245d854c7595ab0345e60ea5f73cd4243e071/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79885bba07eb5bb3ee3ba703286245d854c7595ab0345e60ea5f73cd4243e071/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79885bba07eb5bb3ee3ba703286245d854c7595ab0345e60ea5f73cd4243e071/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79885bba07eb5bb3ee3ba703286245d854c7595ab0345e60ea5f73cd4243e071/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:05 compute-0 podman[74892]: 2026-01-23 18:34:05.698492756 +0000 UTC m=+0.107952302 container init 487bd33745b7200683b2ce33f12064ae9d4b57657b0a3b05c5e649c2a56f42df (image=quay.io/ceph/ceph:v20, name=goofy_shirley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:05 compute-0 podman[74892]: 2026-01-23 18:34:05.706059252 +0000 UTC m=+0.115518778 container start 487bd33745b7200683b2ce33f12064ae9d4b57657b0a3b05c5e649c2a56f42df (image=quay.io/ceph/ceph:v20, name=goofy_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 23 18:34:05 compute-0 podman[74892]: 2026-01-23 18:34:05.612332942 +0000 UTC m=+0.021792488 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:05 compute-0 podman[74892]: 2026-01-23 18:34:05.709713142 +0000 UTC m=+0.119172798 container attach 487bd33745b7200683b2ce33f12064ae9d4b57657b0a3b05c5e649c2a56f42df (image=quay.io/ceph/ceph:v20, name=goofy_shirley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:05 compute-0 ceph-mon[74740]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:34:05 compute-0 ceph-mon[74740]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1687895776' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:05 compute-0 systemd[1]: libpod-487bd33745b7200683b2ce33f12064ae9d4b57657b0a3b05c5e649c2a56f42df.scope: Deactivated successfully.
Jan 23 18:34:05 compute-0 podman[74892]: 2026-01-23 18:34:05.893902002 +0000 UTC m=+0.303361528 container died 487bd33745b7200683b2ce33f12064ae9d4b57657b0a3b05c5e649c2a56f42df (image=quay.io/ceph/ceph:v20, name=goofy_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-79885bba07eb5bb3ee3ba703286245d854c7595ab0345e60ea5f73cd4243e071-merged.mount: Deactivated successfully.
Jan 23 18:34:05 compute-0 podman[74892]: 2026-01-23 18:34:05.943402542 +0000 UTC m=+0.352862078 container remove 487bd33745b7200683b2ce33f12064ae9d4b57657b0a3b05c5e649c2a56f42df (image=quay.io/ceph/ceph:v20, name=goofy_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:05 compute-0 systemd[1]: libpod-conmon-487bd33745b7200683b2ce33f12064ae9d4b57657b0a3b05c5e649c2a56f42df.scope: Deactivated successfully.
Jan 23 18:34:05 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:34:06 compute-0 ceph-mon[74740]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 23 18:34:06 compute-0 ceph-mon[74740]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 23 18:34:06 compute-0 ceph-mon[74740]: mon.compute-0@0(leader) e1 shutdown
Jan 23 18:34:06 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0[74736]: 2026-01-23T18:34:06.123+0000 7ff26ecb6640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 23 18:34:06 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0[74736]: 2026-01-23T18:34:06.123+0000 7ff26ecb6640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 23 18:34:06 compute-0 ceph-mon[74740]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 23 18:34:06 compute-0 ceph-mon[74740]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 23 18:34:06 compute-0 podman[74975]: 2026-01-23 18:34:06.286422526 +0000 UTC m=+0.199212031 container died dc1fee4bf2fc92be1cdab540c8aa667e7720d29033c633a19ec808101c7d2f5d (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b0d2ebf2899b117bc170a1e1dfd5ebb8669b3fd88cd087c2ee9ad58e97bca26-merged.mount: Deactivated successfully.
Jan 23 18:34:06 compute-0 podman[74975]: 2026-01-23 18:34:06.323021148 +0000 UTC m=+0.235810653 container remove dc1fee4bf2fc92be1cdab540c8aa667e7720d29033c633a19ec808101c7d2f5d (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:06 compute-0 bash[74975]: ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0
Jan 23 18:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 18:34:06 compute-0 systemd[1]: ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4@mon.compute-0.service: Deactivated successfully.
Jan 23 18:34:06 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:34:06 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:34:06 compute-0 podman[75078]: 2026-01-23 18:34:06.619971056 +0000 UTC m=+0.038858949 container create 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ed2dd53d7d4c3b91a1d7cec0e24a61686a94fd31baea2e586c35f3287b8d06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ed2dd53d7d4c3b91a1d7cec0e24a61686a94fd31baea2e586c35f3287b8d06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ed2dd53d7d4c3b91a1d7cec0e24a61686a94fd31baea2e586c35f3287b8d06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ed2dd53d7d4c3b91a1d7cec0e24a61686a94fd31baea2e586c35f3287b8d06/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:06 compute-0 podman[75078]: 2026-01-23 18:34:06.665233291 +0000 UTC m=+0.084121194 container init 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 18:34:06 compute-0 podman[75078]: 2026-01-23 18:34:06.672623964 +0000 UTC m=+0.091511857 container start 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 23 18:34:06 compute-0 bash[75078]: 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77
Jan 23 18:34:06 compute-0 podman[75078]: 2026-01-23 18:34:06.602466115 +0000 UTC m=+0.021354028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:06 compute-0 systemd[1]: Started Ceph mon.compute-0 for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:34:06 compute-0 ceph-mon[75097]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 18:34:06 compute-0 ceph-mon[75097]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 23 18:34:06 compute-0 ceph-mon[75097]: pidfile_write: ignore empty --pid-file
Jan 23 18:34:06 compute-0 ceph-mon[75097]: load: jerasure load: lrc 
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: RocksDB version: 7.9.2
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Git sha 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: DB SUMMARY
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: DB Session ID:  95FKTO3MTV4ZWA92OP2M
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: CURRENT file:  CURRENT
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60223 ; 
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                         Options.error_if_exists: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                       Options.create_if_missing: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                                     Options.env: 0x56394345b440
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                                Options.info_log: 0x5639441a3e80
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                              Options.statistics: (nil)
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                               Options.use_fsync: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                              Options.db_log_dir: 
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                                 Options.wal_dir: 
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                    Options.write_buffer_manager: 0x5639441ee140
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                  Options.unordered_write: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                               Options.row_cache: None
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                              Options.wal_filter: None
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.two_write_queues: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.wal_compression: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.atomic_flush: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.max_background_jobs: 2
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.max_background_compactions: -1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.max_subcompactions: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.max_total_wal_size: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                          Options.max_open_files: -1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:       Options.compaction_readahead_size: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Compression algorithms supported:
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         kZSTD supported: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         kXpressCompression supported: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         kBZip2Compression supported: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         kLZ4Compression supported: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         kZlibCompression supported: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         kSnappyCompression supported: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:           Options.merge_operator: 
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:        Options.compaction_filter: None
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639441faa00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5639441df8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:        Options.write_buffer_size: 33554432
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:  Options.max_write_buffer_number: 2
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:          Options.compression: NoCompression
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.num_levels: 7
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1f6198cf-79c4-4079-bac9-29a7a92f17c9
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193246711577, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193246716363, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58422, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55774, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193246, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193246716475, "job": 1, "event": "recovery_finished"}
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56394420ce00
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: DB pointer 0x563944356000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:34:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.44 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   60.44 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.85 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.85 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5639441df8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 18:34:06 compute-0 ceph-mon[75097]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@-1(???) e1 preinit fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@-1(???).mds e1 new map
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-23T18:34:04:461111+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 23 18:34:06 compute-0 ceph-mon[75097]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 23 18:34:06 compute-0 podman[75098]: 2026-01-23 18:34:06.742171678 +0000 UTC m=+0.040871719 container create 4c23d91855db65346cbc78da2c9ff7bc77a4ff24256a76513b682a91edfea860 (image=quay.io/ceph/ceph:v20, name=adoring_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : last_changed 2026-01-23T18:34:01.802781+0000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : created 2026-01-23T18:34:01.802781+0000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : fsmap 
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 23 18:34:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 23 18:34:06 compute-0 systemd[1]: Started libpod-conmon-4c23d91855db65346cbc78da2c9ff7bc77a4ff24256a76513b682a91edfea860.scope.
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 18:34:06 compute-0 ceph-mon[75097]: monmap epoch 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:06 compute-0 ceph-mon[75097]: last_changed 2026-01-23T18:34:01.802781+0000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: created 2026-01-23T18:34:01.802781+0000
Jan 23 18:34:06 compute-0 ceph-mon[75097]: min_mon_release 20 (tentacle)
Jan 23 18:34:06 compute-0 ceph-mon[75097]: election_strategy: 1
Jan 23 18:34:06 compute-0 ceph-mon[75097]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 18:34:06 compute-0 ceph-mon[75097]: fsmap 
Jan 23 18:34:06 compute-0 ceph-mon[75097]: osdmap e1: 0 total, 0 up, 0 in
Jan 23 18:34:06 compute-0 ceph-mon[75097]: mgrmap e1: no daemons active
Jan 23 18:34:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac24d3d1e25d640c9d3f5e9da4050745ed59cbe0fbd252c04f13ff076e2852a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac24d3d1e25d640c9d3f5e9da4050745ed59cbe0fbd252c04f13ff076e2852a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac24d3d1e25d640c9d3f5e9da4050745ed59cbe0fbd252c04f13ff076e2852a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:06 compute-0 podman[75098]: 2026-01-23 18:34:06.725087927 +0000 UTC m=+0.023787988 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:06 compute-0 podman[75098]: 2026-01-23 18:34:06.822905178 +0000 UTC m=+0.121605239 container init 4c23d91855db65346cbc78da2c9ff7bc77a4ff24256a76513b682a91edfea860 (image=quay.io/ceph/ceph:v20, name=adoring_cray, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 18:34:06 compute-0 podman[75098]: 2026-01-23 18:34:06.82909091 +0000 UTC m=+0.127790951 container start 4c23d91855db65346cbc78da2c9ff7bc77a4ff24256a76513b682a91edfea860 (image=quay.io/ceph/ceph:v20, name=adoring_cray, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 18:34:06 compute-0 podman[75098]: 2026-01-23 18:34:06.833614482 +0000 UTC m=+0.132314533 container attach 4c23d91855db65346cbc78da2c9ff7bc77a4ff24256a76513b682a91edfea860 (image=quay.io/ceph/ceph:v20, name=adoring_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 23 18:34:07 compute-0 systemd[1]: libpod-4c23d91855db65346cbc78da2c9ff7bc77a4ff24256a76513b682a91edfea860.scope: Deactivated successfully.
Jan 23 18:34:07 compute-0 podman[75098]: 2026-01-23 18:34:07.067635869 +0000 UTC m=+0.366335900 container died 4c23d91855db65346cbc78da2c9ff7bc77a4ff24256a76513b682a91edfea860 (image=quay.io/ceph/ceph:v20, name=adoring_cray, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ac24d3d1e25d640c9d3f5e9da4050745ed59cbe0fbd252c04f13ff076e2852a-merged.mount: Deactivated successfully.
Jan 23 18:34:07 compute-0 podman[75098]: 2026-01-23 18:34:07.974319494 +0000 UTC m=+1.273019535 container remove 4c23d91855db65346cbc78da2c9ff7bc77a4ff24256a76513b682a91edfea860 (image=quay.io/ceph/ceph:v20, name=adoring_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 23 18:34:08 compute-0 systemd[1]: libpod-conmon-4c23d91855db65346cbc78da2c9ff7bc77a4ff24256a76513b682a91edfea860.scope: Deactivated successfully.
Jan 23 18:34:08 compute-0 podman[75191]: 2026-01-23 18:34:08.031695328 +0000 UTC m=+0.033531677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:08 compute-0 podman[75191]: 2026-01-23 18:34:08.149438599 +0000 UTC m=+0.151274928 container create 8d2c41974be72cfbe158d2100e4389fa0533a8423b7c225c7671c4586cf2c7d5 (image=quay.io/ceph/ceph:v20, name=zen_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 18:34:08 compute-0 systemd[1]: Started libpod-conmon-8d2c41974be72cfbe158d2100e4389fa0533a8423b7c225c7671c4586cf2c7d5.scope.
Jan 23 18:34:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a562d72e4ce21b2b677123823d885176c27d56be2aee4353e65483fdcdd48/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a562d72e4ce21b2b677123823d885176c27d56be2aee4353e65483fdcdd48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a562d72e4ce21b2b677123823d885176c27d56be2aee4353e65483fdcdd48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:08 compute-0 podman[75191]: 2026-01-23 18:34:08.22652539 +0000 UTC m=+0.228361699 container init 8d2c41974be72cfbe158d2100e4389fa0533a8423b7c225c7671c4586cf2c7d5 (image=quay.io/ceph/ceph:v20, name=zen_keller, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:08 compute-0 podman[75191]: 2026-01-23 18:34:08.235641995 +0000 UTC m=+0.237478324 container start 8d2c41974be72cfbe158d2100e4389fa0533a8423b7c225c7671c4586cf2c7d5 (image=quay.io/ceph/ceph:v20, name=zen_keller, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:08 compute-0 podman[75191]: 2026-01-23 18:34:08.240441663 +0000 UTC m=+0.242277982 container attach 8d2c41974be72cfbe158d2100e4389fa0533a8423b7c225c7671c4586cf2c7d5 (image=quay.io/ceph/ceph:v20, name=zen_keller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 23 18:34:08 compute-0 systemd[1]: libpod-8d2c41974be72cfbe158d2100e4389fa0533a8423b7c225c7671c4586cf2c7d5.scope: Deactivated successfully.
Jan 23 18:34:08 compute-0 podman[75191]: 2026-01-23 18:34:08.467129769 +0000 UTC m=+0.468966088 container died 8d2c41974be72cfbe158d2100e4389fa0533a8423b7c225c7671c4586cf2c7d5 (image=quay.io/ceph/ceph:v20, name=zen_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d21a562d72e4ce21b2b677123823d885176c27d56be2aee4353e65483fdcdd48-merged.mount: Deactivated successfully.
Jan 23 18:34:08 compute-0 podman[75191]: 2026-01-23 18:34:08.567142115 +0000 UTC m=+0.568978404 container remove 8d2c41974be72cfbe158d2100e4389fa0533a8423b7c225c7671c4586cf2c7d5 (image=quay.io/ceph/ceph:v20, name=zen_keller, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:08 compute-0 systemd[1]: libpod-conmon-8d2c41974be72cfbe158d2100e4389fa0533a8423b7c225c7671c4586cf2c7d5.scope: Deactivated successfully.
Jan 23 18:34:08 compute-0 systemd[1]: Reloading.
Jan 23 18:34:08 compute-0 systemd-rc-local-generator[75272]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:08 compute-0 systemd-sysv-generator[75276]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:08 compute-0 systemd[1]: Reloading.
Jan 23 18:34:08 compute-0 systemd-rc-local-generator[75312]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:08 compute-0 systemd-sysv-generator[75316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:09 compute-0 systemd[1]: Starting Ceph mgr.compute-0.rszfwe for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:34:09 compute-0 podman[75371]: 2026-01-23 18:34:09.335417987 +0000 UTC m=+0.060397410 container create 3aebf20d64874c41b9d90210fbd91ff757b44133fee66e986f651348a186acbc (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5944193482e80bdc232e023a312d91708507bab59c4a22e092092e650b24414/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5944193482e80bdc232e023a312d91708507bab59c4a22e092092e650b24414/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5944193482e80bdc232e023a312d91708507bab59c4a22e092092e650b24414/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5944193482e80bdc232e023a312d91708507bab59c4a22e092092e650b24414/merged/var/lib/ceph/mgr/ceph-compute-0.rszfwe supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:09 compute-0 podman[75371]: 2026-01-23 18:34:09.301081411 +0000 UTC m=+0.026061004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:09 compute-0 podman[75371]: 2026-01-23 18:34:09.39682461 +0000 UTC m=+0.121804043 container init 3aebf20d64874c41b9d90210fbd91ff757b44133fee66e986f651348a186acbc (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 18:34:09 compute-0 podman[75371]: 2026-01-23 18:34:09.407582936 +0000 UTC m=+0.132562349 container start 3aebf20d64874c41b9d90210fbd91ff757b44133fee66e986f651348a186acbc (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 18:34:09 compute-0 bash[75371]: 3aebf20d64874c41b9d90210fbd91ff757b44133fee66e986f651348a186acbc
Jan 23 18:34:09 compute-0 systemd[1]: Started Ceph mgr.compute-0.rszfwe for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:34:09 compute-0 ceph-mgr[75390]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 18:34:09 compute-0 ceph-mgr[75390]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 23 18:34:09 compute-0 ceph-mgr[75390]: pidfile_write: ignore empty --pid-file
Jan 23 18:34:09 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'alerts'
Jan 23 18:34:09 compute-0 podman[75391]: 2026-01-23 18:34:09.496418245 +0000 UTC m=+0.047275416 container create 6f01f3404eb56362987bce0c0a093f4ac2386e129826c3d2f57403aa053c7f5a (image=quay.io/ceph/ceph:v20, name=kind_pascal, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:09 compute-0 systemd[1]: Started libpod-conmon-6f01f3404eb56362987bce0c0a093f4ac2386e129826c3d2f57403aa053c7f5a.scope.
Jan 23 18:34:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af00e450bf595d0aef5f99d4a30967357b3c383dc676975bf9e8fe3d5b07d816/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af00e450bf595d0aef5f99d4a30967357b3c383dc676975bf9e8fe3d5b07d816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af00e450bf595d0aef5f99d4a30967357b3c383dc676975bf9e8fe3d5b07d816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:09 compute-0 podman[75391]: 2026-01-23 18:34:09.479371115 +0000 UTC m=+0.030228296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:09 compute-0 podman[75391]: 2026-01-23 18:34:09.586541867 +0000 UTC m=+0.137399098 container init 6f01f3404eb56362987bce0c0a093f4ac2386e129826c3d2f57403aa053c7f5a (image=quay.io/ceph/ceph:v20, name=kind_pascal, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 18:34:09 compute-0 podman[75391]: 2026-01-23 18:34:09.59401709 +0000 UTC m=+0.144874281 container start 6f01f3404eb56362987bce0c0a093f4ac2386e129826c3d2f57403aa053c7f5a (image=quay.io/ceph/ceph:v20, name=kind_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:09 compute-0 podman[75391]: 2026-01-23 18:34:09.598435479 +0000 UTC m=+0.149292700 container attach 6f01f3404eb56362987bce0c0a093f4ac2386e129826c3d2f57403aa053c7f5a (image=quay.io/ceph/ceph:v20, name=kind_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:09 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'balancer'
Jan 23 18:34:09 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'cephadm'
Jan 23 18:34:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 23 18:34:09 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2593063727' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 23 18:34:09 compute-0 kind_pascal[75425]: 
Jan 23 18:34:09 compute-0 kind_pascal[75425]: {
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "health": {
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "status": "HEALTH_OK",
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "checks": {},
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "mutes": []
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     },
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "election_epoch": 5,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "quorum": [
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         0
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     ],
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "quorum_names": [
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "compute-0"
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     ],
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "quorum_age": 3,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "monmap": {
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "epoch": 1,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "min_mon_release_name": "tentacle",
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "num_mons": 1
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     },
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "osdmap": {
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "epoch": 1,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "num_osds": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "num_up_osds": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "osd_up_since": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "num_in_osds": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "osd_in_since": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "num_remapped_pgs": 0
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     },
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "pgmap": {
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "pgs_by_state": [],
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "num_pgs": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "num_pools": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "num_objects": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "data_bytes": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "bytes_used": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "bytes_avail": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "bytes_total": 0
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     },
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "fsmap": {
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "epoch": 1,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "btime": "2026-01-23T18:34:04:461111+0000",
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "by_rank": [],
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "up:standby": 0
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     },
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "mgrmap": {
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "available": false,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "num_standbys": 0,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "modules": [
Jan 23 18:34:09 compute-0 kind_pascal[75425]:             "iostat",
Jan 23 18:34:09 compute-0 kind_pascal[75425]:             "nfs"
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         ],
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "services": {}
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     },
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "servicemap": {
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "epoch": 1,
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "modified": "2026-01-23T18:34:04.464417+0000",
Jan 23 18:34:09 compute-0 kind_pascal[75425]:         "services": {}
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     },
Jan 23 18:34:09 compute-0 kind_pascal[75425]:     "progress_events": {}
Jan 23 18:34:09 compute-0 kind_pascal[75425]: }
Jan 23 18:34:09 compute-0 systemd[1]: libpod-6f01f3404eb56362987bce0c0a093f4ac2386e129826c3d2f57403aa053c7f5a.scope: Deactivated successfully.
Jan 23 18:34:09 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2593063727' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 23 18:34:09 compute-0 podman[75451]: 2026-01-23 18:34:09.904989145 +0000 UTC m=+0.029592711 container died 6f01f3404eb56362987bce0c0a093f4ac2386e129826c3d2f57403aa053c7f5a (image=quay.io/ceph/ceph:v20, name=kind_pascal, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 18:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-af00e450bf595d0aef5f99d4a30967357b3c383dc676975bf9e8fe3d5b07d816-merged.mount: Deactivated successfully.
Jan 23 18:34:09 compute-0 podman[75451]: 2026-01-23 18:34:09.939127497 +0000 UTC m=+0.063731003 container remove 6f01f3404eb56362987bce0c0a093f4ac2386e129826c3d2f57403aa053c7f5a (image=quay.io/ceph/ceph:v20, name=kind_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:09 compute-0 systemd[1]: libpod-conmon-6f01f3404eb56362987bce0c0a093f4ac2386e129826c3d2f57403aa053c7f5a.scope: Deactivated successfully.
Jan 23 18:34:10 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'crash'
Jan 23 18:34:10 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'dashboard'
Jan 23 18:34:11 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'devicehealth'
Jan 23 18:34:11 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 18:34:11 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 18:34:11 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 18:34:11 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]:   from numpy import show_config as show_numpy_config
Jan 23 18:34:11 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'influx'
Jan 23 18:34:11 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'insights'
Jan 23 18:34:11 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'iostat'
Jan 23 18:34:11 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'k8sevents'
Jan 23 18:34:12 compute-0 podman[75477]: 2026-01-23 18:34:12.016650827 +0000 UTC m=+0.047131972 container create 55d34ee5e66d0d6bccdd47427207d602ee651c1f178f143911389d20281ffafe (image=quay.io/ceph/ceph:v20, name=distracted_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 18:34:12 compute-0 systemd[1]: Started libpod-conmon-55d34ee5e66d0d6bccdd47427207d602ee651c1f178f143911389d20281ffafe.scope.
Jan 23 18:34:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:12 compute-0 podman[75477]: 2026-01-23 18:34:11.99526344 +0000 UTC m=+0.025744585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ecfac362a80c138497166128da6dd03b7509070185f84ced2fe22de44c71e6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ecfac362a80c138497166128da6dd03b7509070185f84ced2fe22de44c71e6c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ecfac362a80c138497166128da6dd03b7509070185f84ced2fe22de44c71e6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:12 compute-0 podman[75477]: 2026-01-23 18:34:12.106606484 +0000 UTC m=+0.137087679 container init 55d34ee5e66d0d6bccdd47427207d602ee651c1f178f143911389d20281ffafe (image=quay.io/ceph/ceph:v20, name=distracted_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 23 18:34:12 compute-0 podman[75477]: 2026-01-23 18:34:12.113697089 +0000 UTC m=+0.144178274 container start 55d34ee5e66d0d6bccdd47427207d602ee651c1f178f143911389d20281ffafe (image=quay.io/ceph/ceph:v20, name=distracted_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:12 compute-0 podman[75477]: 2026-01-23 18:34:12.118778034 +0000 UTC m=+0.149259179 container attach 55d34ee5e66d0d6bccdd47427207d602ee651c1f178f143911389d20281ffafe (image=quay.io/ceph/ceph:v20, name=distracted_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:34:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 23 18:34:12 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481693446' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 23 18:34:12 compute-0 distracted_rubin[75493]: 
Jan 23 18:34:12 compute-0 distracted_rubin[75493]: {
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "health": {
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "status": "HEALTH_OK",
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "checks": {},
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "mutes": []
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     },
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "election_epoch": 5,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "quorum": [
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         0
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     ],
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "quorum_names": [
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "compute-0"
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     ],
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "quorum_age": 5,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "monmap": {
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "epoch": 1,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "min_mon_release_name": "tentacle",
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "num_mons": 1
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     },
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "osdmap": {
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "epoch": 1,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "num_osds": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "num_up_osds": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "osd_up_since": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "num_in_osds": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "osd_in_since": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "num_remapped_pgs": 0
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     },
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "pgmap": {
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "pgs_by_state": [],
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "num_pgs": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "num_pools": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "num_objects": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "data_bytes": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "bytes_used": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "bytes_avail": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "bytes_total": 0
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     },
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "fsmap": {
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "epoch": 1,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "btime": "2026-01-23T18:34:04:461111+0000",
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "by_rank": [],
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "up:standby": 0
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     },
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "mgrmap": {
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "available": false,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "num_standbys": 0,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "modules": [
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:             "iostat",
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:             "nfs"
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         ],
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "services": {}
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     },
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "servicemap": {
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "epoch": 1,
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "modified": "2026-01-23T18:34:04.464417+0000",
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:         "services": {}
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     },
Jan 23 18:34:12 compute-0 distracted_rubin[75493]:     "progress_events": {}
Jan 23 18:34:12 compute-0 distracted_rubin[75493]: }
Jan 23 18:34:12 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'localpool'
Jan 23 18:34:12 compute-0 systemd[1]: libpod-55d34ee5e66d0d6bccdd47427207d602ee651c1f178f143911389d20281ffafe.scope: Deactivated successfully.
Jan 23 18:34:12 compute-0 podman[75477]: 2026-01-23 18:34:12.313259747 +0000 UTC m=+0.343740922 container died 55d34ee5e66d0d6bccdd47427207d602ee651c1f178f143911389d20281ffafe (image=quay.io/ceph/ceph:v20, name=distracted_rubin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ecfac362a80c138497166128da6dd03b7509070185f84ced2fe22de44c71e6c-merged.mount: Deactivated successfully.
Jan 23 18:34:12 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/481693446' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 23 18:34:12 compute-0 podman[75477]: 2026-01-23 18:34:12.361031624 +0000 UTC m=+0.391512769 container remove 55d34ee5e66d0d6bccdd47427207d602ee651c1f178f143911389d20281ffafe (image=quay.io/ceph/ceph:v20, name=distracted_rubin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:12 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 18:34:12 compute-0 systemd[1]: libpod-conmon-55d34ee5e66d0d6bccdd47427207d602ee651c1f178f143911389d20281ffafe.scope: Deactivated successfully.
Jan 23 18:34:12 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'mirroring'
Jan 23 18:34:12 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'nfs'
Jan 23 18:34:12 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'orchestrator'
Jan 23 18:34:13 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 18:34:13 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'osd_support'
Jan 23 18:34:13 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 18:34:13 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'progress'
Jan 23 18:34:13 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'prometheus'
Jan 23 18:34:13 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'rbd_support'
Jan 23 18:34:14 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'rgw'
Jan 23 18:34:14 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'rook'
Jan 23 18:34:14 compute-0 podman[75532]: 2026-01-23 18:34:14.407123961 +0000 UTC m=+0.022943597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:14 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'selftest'
Jan 23 18:34:15 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'smb'
Jan 23 18:34:15 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'snap_schedule'
Jan 23 18:34:15 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'stats'
Jan 23 18:34:15 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'status'
Jan 23 18:34:15 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'telegraf'
Jan 23 18:34:15 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'telemetry'
Jan 23 18:34:15 compute-0 podman[75532]: 2026-01-23 18:34:15.786929007 +0000 UTC m=+1.402748613 container create f856476469fa01e8d135b6fae0ea72b437109e303b7c4e2f6536f5c84fa516ef (image=quay.io/ceph/ceph:v20, name=wizardly_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:15 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 18:34:15 compute-0 systemd[1]: Started libpod-conmon-f856476469fa01e8d135b6fae0ea72b437109e303b7c4e2f6536f5c84fa516ef.scope.
Jan 23 18:34:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1da0dfa90ba75f56955d35eca2c258368d751adfb6bc043bbabe11b109fcff7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1da0dfa90ba75f56955d35eca2c258368d751adfb6bc043bbabe11b109fcff7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1da0dfa90ba75f56955d35eca2c258368d751adfb6bc043bbabe11b109fcff7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'volumes'
Jan 23 18:34:16 compute-0 podman[75532]: 2026-01-23 18:34:16.063978745 +0000 UTC m=+1.679798431 container init f856476469fa01e8d135b6fae0ea72b437109e303b7c4e2f6536f5c84fa516ef (image=quay.io/ceph/ceph:v20, name=wizardly_torvalds, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:16 compute-0 podman[75532]: 2026-01-23 18:34:16.069905601 +0000 UTC m=+1.685725227 container start f856476469fa01e8d135b6fae0ea72b437109e303b7c4e2f6536f5c84fa516ef (image=quay.io/ceph/ceph:v20, name=wizardly_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:34:16 compute-0 podman[75532]: 2026-01-23 18:34:16.074419243 +0000 UTC m=+1.690238889 container attach f856476469fa01e8d135b6fae0ea72b437109e303b7c4e2f6536f5c84fa516ef (image=quay.io/ceph/ceph:v20, name=wizardly_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1910859867' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]: 
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]: {
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "health": {
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "status": "HEALTH_OK",
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "checks": {},
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "mutes": []
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     },
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "election_epoch": 5,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "quorum": [
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         0
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     ],
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "quorum_names": [
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "compute-0"
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     ],
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "quorum_age": 9,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "monmap": {
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "epoch": 1,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "min_mon_release_name": "tentacle",
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "num_mons": 1
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     },
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "osdmap": {
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "epoch": 1,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "num_osds": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "num_up_osds": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "osd_up_since": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "num_in_osds": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "osd_in_since": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "num_remapped_pgs": 0
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     },
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "pgmap": {
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "pgs_by_state": [],
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "num_pgs": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "num_pools": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "num_objects": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "data_bytes": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "bytes_used": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "bytes_avail": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "bytes_total": 0
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     },
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "fsmap": {
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "epoch": 1,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "btime": "2026-01-23T18:34:04.461111+0000",
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "by_rank": [],
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "up:standby": 0
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     },
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "mgrmap": {
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "available": false,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "num_standbys": 0,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "modules": [
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:             "iostat",
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:             "nfs"
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         ],
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "services": {}
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     },
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "servicemap": {
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "epoch": 1,
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "modified": "2026-01-23T18:34:04.464417+0000",
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:         "services": {}
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     },
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]:     "progress_events": {}
Jan 23 18:34:16 compute-0 wizardly_torvalds[75549]: }
Jan 23 18:34:16 compute-0 systemd[1]: libpod-f856476469fa01e8d135b6fae0ea72b437109e303b7c4e2f6536f5c84fa516ef.scope: Deactivated successfully.
Jan 23 18:34:16 compute-0 podman[75532]: 2026-01-23 18:34:16.284693765 +0000 UTC m=+1.900513381 container died f856476469fa01e8d135b6fae0ea72b437109e303b7c4e2f6536f5c84fa516ef (image=quay.io/ceph/ceph:v20, name=wizardly_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1da0dfa90ba75f56955d35eca2c258368d751adfb6bc043bbabe11b109fcff7-merged.mount: Deactivated successfully.
Jan 23 18:34:16 compute-0 podman[75532]: 2026-01-23 18:34:16.3193958 +0000 UTC m=+1.935215406 container remove f856476469fa01e8d135b6fae0ea72b437109e303b7c4e2f6536f5c84fa516ef (image=quay.io/ceph/ceph:v20, name=wizardly_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1910859867' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 23 18:34:16 compute-0 systemd[1]: libpod-conmon-f856476469fa01e8d135b6fae0ea72b437109e303b7c4e2f6536f5c84fa516ef.scope: Deactivated successfully.
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: ms_deliver_dispatch: unhandled message 0x55e7d1045860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rszfwe
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr handle_mgr_map Activating!
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.rszfwe(active, starting, since 0.0235699s)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr handle_mgr_map I am now activating
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mds metadata"} : dispatch
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e1 all = 1
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata"} : dispatch
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mon metadata"} : dispatch
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rszfwe", "id": "compute-0.rszfwe"} v 0)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mgr metadata", "who": "compute-0.rszfwe", "id": "compute-0.rszfwe"} : dispatch
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: balancer
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [balancer INFO root] Starting
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : Manager daemon compute-0.rszfwe is now available
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: crash
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:34:16
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [balancer INFO root] No pools available
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: devicehealth
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: iostat
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [devicehealth INFO root] Starting
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: nfs
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: orchestrator
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: pg_autoscaler
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: progress
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [progress INFO root] Loading...
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [progress INFO root] No stored events to load
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [progress INFO root] Loaded [] historic events
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [progress INFO root] Loaded OSDMap, ready.
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [rbd_support INFO root] recovery thread starting
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [rbd_support INFO root] starting setup
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: rbd_support
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: status
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: telemetry
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/mirror_snapshot_schedule"} v 0)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/mirror_snapshot_schedule"} : dispatch
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [rbd_support INFO root] PerfHandler: starting
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TaskHandler: starting
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/trash_purge_schedule"} v 0)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/trash_purge_schedule"} : dispatch
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: [rbd_support INFO root] setup complete
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 23 18:34:16 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: volumes
Jan 23 18:34:16 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:17 compute-0 ceph-mon[75097]: Activating manager daemon compute-0.rszfwe
Jan 23 18:34:17 compute-0 ceph-mon[75097]: mgrmap e2: compute-0.rszfwe(active, starting, since 0.0235699s)
Jan 23 18:34:17 compute-0 ceph-mon[75097]: from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mds metadata"} : dispatch
Jan 23 18:34:17 compute-0 ceph-mon[75097]: from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata"} : dispatch
Jan 23 18:34:17 compute-0 ceph-mon[75097]: from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mon metadata"} : dispatch
Jan 23 18:34:17 compute-0 ceph-mon[75097]: from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 23 18:34:17 compute-0 ceph-mon[75097]: from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mgr metadata", "who": "compute-0.rszfwe", "id": "compute-0.rszfwe"} : dispatch
Jan 23 18:34:17 compute-0 ceph-mon[75097]: Manager daemon compute-0.rszfwe is now available
Jan 23 18:34:17 compute-0 ceph-mon[75097]: from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/mirror_snapshot_schedule"} : dispatch
Jan 23 18:34:17 compute-0 ceph-mon[75097]: from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/trash_purge_schedule"} : dispatch
Jan 23 18:34:17 compute-0 ceph-mon[75097]: from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:17 compute-0 ceph-mon[75097]: from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:17 compute-0 ceph-mon[75097]: from='mgr.14102 192.168.122.100:0/2906695117' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:17 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.rszfwe(active, since 1.14083s)
Jan 23 18:34:18 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:18 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:18 compute-0 podman[75666]: 2026-01-23 18:34:18.367939254 +0000 UTC m=+0.024966412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:18 compute-0 podman[75666]: 2026-01-23 18:34:18.549528805 +0000 UTC m=+0.206555933 container create 0bbc17b91167f7764311043ff551237eb50f70467abf93dee19db29dd1e8cd86 (image=quay.io/ceph/ceph:v20, name=dreamy_spence, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 23 18:34:18 compute-0 ceph-mon[75097]: mgrmap e3: compute-0.rszfwe(active, since 1.14083s)
Jan 23 18:34:18 compute-0 systemd[1]: Started libpod-conmon-0bbc17b91167f7764311043ff551237eb50f70467abf93dee19db29dd1e8cd86.scope.
Jan 23 18:34:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f925a61ae018c8cd0bbaccd0f679a312fbd56dc64be1a16f90ce54d03682ca2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f925a61ae018c8cd0bbaccd0f679a312fbd56dc64be1a16f90ce54d03682ca2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f925a61ae018c8cd0bbaccd0f679a312fbd56dc64be1a16f90ce54d03682ca2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:18 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.rszfwe(active, since 2s)
Jan 23 18:34:18 compute-0 podman[75666]: 2026-01-23 18:34:18.636352206 +0000 UTC m=+0.293379334 container init 0bbc17b91167f7764311043ff551237eb50f70467abf93dee19db29dd1e8cd86 (image=quay.io/ceph/ceph:v20, name=dreamy_spence, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 18:34:18 compute-0 podman[75666]: 2026-01-23 18:34:18.644546821 +0000 UTC m=+0.301573919 container start 0bbc17b91167f7764311043ff551237eb50f70467abf93dee19db29dd1e8cd86 (image=quay.io/ceph/ceph:v20, name=dreamy_spence, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 23 18:34:19 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1730510005' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 23 18:34:19 compute-0 dreamy_spence[75682]: 
Jan 23 18:34:19 compute-0 dreamy_spence[75682]: {
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "health": {
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "status": "HEALTH_OK",
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "checks": {},
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "mutes": []
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     },
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "election_epoch": 5,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "quorum": [
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         0
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     ],
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "quorum_names": [
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "compute-0"
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     ],
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "quorum_age": 12,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "monmap": {
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "epoch": 1,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "min_mon_release_name": "tentacle",
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "num_mons": 1
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     },
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "osdmap": {
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "epoch": 1,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "num_osds": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "num_up_osds": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "osd_up_since": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "num_in_osds": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "osd_in_since": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "num_remapped_pgs": 0
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     },
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "pgmap": {
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "pgs_by_state": [],
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "num_pgs": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "num_pools": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "num_objects": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "data_bytes": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "bytes_used": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "bytes_avail": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "bytes_total": 0
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     },
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "fsmap": {
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "epoch": 1,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "btime": "2026-01-23T18:34:04.461111+0000",
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "by_rank": [],
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "up:standby": 0
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     },
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "mgrmap": {
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "available": true,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "num_standbys": 0,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "modules": [
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:             "iostat",
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:             "nfs"
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         ],
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "services": {}
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     },
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "servicemap": {
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "epoch": 1,
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "modified": "2026-01-23T18:34:04.464417+0000",
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:         "services": {}
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     },
Jan 23 18:34:19 compute-0 dreamy_spence[75682]:     "progress_events": {}
Jan 23 18:34:19 compute-0 dreamy_spence[75682]: }
Jan 23 18:34:19 compute-0 systemd[1]: libpod-0bbc17b91167f7764311043ff551237eb50f70467abf93dee19db29dd1e8cd86.scope: Deactivated successfully.
Jan 23 18:34:19 compute-0 podman[75666]: 2026-01-23 18:34:19.296963623 +0000 UTC m=+0.953990721 container attach 0bbc17b91167f7764311043ff551237eb50f70467abf93dee19db29dd1e8cd86 (image=quay.io/ceph/ceph:v20, name=dreamy_spence, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 18:34:19 compute-0 podman[75666]: 2026-01-23 18:34:19.297649861 +0000 UTC m=+0.954676979 container died 0bbc17b91167f7764311043ff551237eb50f70467abf93dee19db29dd1e8cd86 (image=quay.io/ceph/ceph:v20, name=dreamy_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f925a61ae018c8cd0bbaccd0f679a312fbd56dc64be1a16f90ce54d03682ca2-merged.mount: Deactivated successfully.
Jan 23 18:34:19 compute-0 podman[75666]: 2026-01-23 18:34:19.52977793 +0000 UTC m=+1.186805028 container remove 0bbc17b91167f7764311043ff551237eb50f70467abf93dee19db29dd1e8cd86 (image=quay.io/ceph/ceph:v20, name=dreamy_spence, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:34:19 compute-0 systemd[1]: libpod-conmon-0bbc17b91167f7764311043ff551237eb50f70467abf93dee19db29dd1e8cd86.scope: Deactivated successfully.
Jan 23 18:34:19 compute-0 podman[75720]: 2026-01-23 18:34:19.588967324 +0000 UTC m=+0.041481834 container create d53edefd5f2e22548e420f33b2448505a592aae325bf797ab3f6a38d79b3b66e (image=quay.io/ceph/ceph:v20, name=zen_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 23 18:34:19 compute-0 systemd[1]: Started libpod-conmon-d53edefd5f2e22548e420f33b2448505a592aae325bf797ab3f6a38d79b3b66e.scope.
Jan 23 18:34:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe5778dc29c3b617a84e458728b36fdf00ea649294368a5e38dbdd6724627c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe5778dc29c3b617a84e458728b36fdf00ea649294368a5e38dbdd6724627c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe5778dc29c3b617a84e458728b36fdf00ea649294368a5e38dbdd6724627c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe5778dc29c3b617a84e458728b36fdf00ea649294368a5e38dbdd6724627c1/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:19 compute-0 podman[75720]: 2026-01-23 18:34:19.569239223 +0000 UTC m=+0.021753753 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:19 compute-0 ceph-mon[75097]: mgrmap e4: compute-0.rszfwe(active, since 2s)
Jan 23 18:34:19 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1730510005' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 23 18:34:19 compute-0 podman[75720]: 2026-01-23 18:34:19.809335609 +0000 UTC m=+0.261850149 container init d53edefd5f2e22548e420f33b2448505a592aae325bf797ab3f6a38d79b3b66e (image=quay.io/ceph/ceph:v20, name=zen_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 23 18:34:19 compute-0 podman[75720]: 2026-01-23 18:34:19.818246901 +0000 UTC m=+0.270761401 container start d53edefd5f2e22548e420f33b2448505a592aae325bf797ab3f6a38d79b3b66e (image=quay.io/ceph/ceph:v20, name=zen_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 23 18:34:19 compute-0 podman[75720]: 2026-01-23 18:34:19.822864076 +0000 UTC m=+0.275378616 container attach d53edefd5f2e22548e420f33b2448505a592aae325bf797ab3f6a38d79b3b66e (image=quay.io/ceph/ceph:v20, name=zen_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 23 18:34:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3883221591' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 23 18:34:20 compute-0 zen_pascal[75736]: 
Jan 23 18:34:20 compute-0 zen_pascal[75736]: [global]
Jan 23 18:34:20 compute-0 zen_pascal[75736]:         fsid = 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:20 compute-0 zen_pascal[75736]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 23 18:34:20 compute-0 zen_pascal[75736]:         osd_crush_chooseleaf_type = 0
Jan 23 18:34:20 compute-0 systemd[1]: libpod-d53edefd5f2e22548e420f33b2448505a592aae325bf797ab3f6a38d79b3b66e.scope: Deactivated successfully.
Jan 23 18:34:20 compute-0 podman[75720]: 2026-01-23 18:34:20.263650992 +0000 UTC m=+0.716165522 container died d53edefd5f2e22548e420f33b2448505a592aae325bf797ab3f6a38d79b3b66e (image=quay.io/ceph/ceph:v20, name=zen_pascal, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 23 18:34:20 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:20 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffe5778dc29c3b617a84e458728b36fdf00ea649294368a5e38dbdd6724627c1-merged.mount: Deactivated successfully.
Jan 23 18:34:20 compute-0 podman[75720]: 2026-01-23 18:34:20.454481252 +0000 UTC m=+0.906995752 container remove d53edefd5f2e22548e420f33b2448505a592aae325bf797ab3f6a38d79b3b66e (image=quay.io/ceph/ceph:v20, name=zen_pascal, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:20 compute-0 systemd[1]: libpod-conmon-d53edefd5f2e22548e420f33b2448505a592aae325bf797ab3f6a38d79b3b66e.scope: Deactivated successfully.
Jan 23 18:34:20 compute-0 podman[75774]: 2026-01-23 18:34:20.526018463 +0000 UTC m=+0.053486752 container create 9ef3e75fca30dbbe528916676b44821ffd65d4ac3be80d04dae510aabc6cc01d (image=quay.io/ceph/ceph:v20, name=frosty_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:20 compute-0 systemd[1]: Started libpod-conmon-9ef3e75fca30dbbe528916676b44821ffd65d4ac3be80d04dae510aabc6cc01d.scope.
Jan 23 18:34:20 compute-0 podman[75774]: 2026-01-23 18:34:20.493915644 +0000 UTC m=+0.021383953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:20 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e66a207bc4d4e208d6d18f177194be7297a9584120d276401a8840f57e679db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e66a207bc4d4e208d6d18f177194be7297a9584120d276401a8840f57e679db/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e66a207bc4d4e208d6d18f177194be7297a9584120d276401a8840f57e679db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:20 compute-0 podman[75774]: 2026-01-23 18:34:20.647102137 +0000 UTC m=+0.174570436 container init 9ef3e75fca30dbbe528916676b44821ffd65d4ac3be80d04dae510aabc6cc01d (image=quay.io/ceph/ceph:v20, name=frosty_shamir, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:20 compute-0 podman[75774]: 2026-01-23 18:34:20.657294222 +0000 UTC m=+0.184762541 container start 9ef3e75fca30dbbe528916676b44821ffd65d4ac3be80d04dae510aabc6cc01d (image=quay.io/ceph/ceph:v20, name=frosty_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 18:34:20 compute-0 podman[75774]: 2026-01-23 18:34:20.662765128 +0000 UTC m=+0.190233427 container attach 9ef3e75fca30dbbe528916676b44821ffd65d4ac3be80d04dae510aabc6cc01d (image=quay.io/ceph/ceph:v20, name=frosty_shamir, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 23 18:34:20 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3883221591' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 23 18:34:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 23 18:34:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4258897405' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 23 18:34:21 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4258897405' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 23 18:34:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4258897405' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  1: '-n'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  2: 'mgr.compute-0.rszfwe'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  3: '-f'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  4: '--setuser'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  5: 'ceph'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  6: '--setgroup'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  7: 'ceph'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  8: '--default-log-to-file=false'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  9: '--default-log-to-journald=true'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr respawn  exe_path /proc/self/exe
Jan 23 18:34:22 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.rszfwe(active, since 5s)
Jan 23 18:34:22 compute-0 systemd[1]: libpod-9ef3e75fca30dbbe528916676b44821ffd65d4ac3be80d04dae510aabc6cc01d.scope: Deactivated successfully.
Jan 23 18:34:22 compute-0 podman[75774]: 2026-01-23 18:34:22.035662128 +0000 UTC m=+1.563130417 container died 9ef3e75fca30dbbe528916676b44821ffd65d4ac3be80d04dae510aabc6cc01d (image=quay.io/ceph/ceph:v20, name=frosty_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 18:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e66a207bc4d4e208d6d18f177194be7297a9584120d276401a8840f57e679db-merged.mount: Deactivated successfully.
Jan 23 18:34:22 compute-0 podman[75774]: 2026-01-23 18:34:22.096143924 +0000 UTC m=+1.623612213 container remove 9ef3e75fca30dbbe528916676b44821ffd65d4ac3be80d04dae510aabc6cc01d (image=quay.io/ceph/ceph:v20, name=frosty_shamir, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:22 compute-0 systemd[1]: libpod-conmon-9ef3e75fca30dbbe528916676b44821ffd65d4ac3be80d04dae510aabc6cc01d.scope: Deactivated successfully.
Jan 23 18:34:22 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: ignoring --setuser ceph since I am not root
Jan 23 18:34:22 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: ignoring --setgroup ceph since I am not root
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: pidfile_write: ignore empty --pid-file
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'alerts'
Jan 23 18:34:22 compute-0 podman[75828]: 2026-01-23 18:34:22.168116807 +0000 UTC m=+0.053672218 container create 366b7df9d2ec4eda87cdf9f5200dda6714756c1c0f0e6c4dc2b414f1d47e0c1b (image=quay.io/ceph/ceph:v20, name=laughing_herschel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 18:34:22 compute-0 systemd[1]: Started libpod-conmon-366b7df9d2ec4eda87cdf9f5200dda6714756c1c0f0e6c4dc2b414f1d47e0c1b.scope.
Jan 23 18:34:22 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd680e5603888d6bd28de23b509d7b40f77068b10e443958ff44831d4df6abd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd680e5603888d6bd28de23b509d7b40f77068b10e443958ff44831d4df6abd0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd680e5603888d6bd28de23b509d7b40f77068b10e443958ff44831d4df6abd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:22 compute-0 podman[75828]: 2026-01-23 18:34:22.135952256 +0000 UTC m=+0.021507687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:22 compute-0 podman[75828]: 2026-01-23 18:34:22.234276874 +0000 UTC m=+0.119832305 container init 366b7df9d2ec4eda87cdf9f5200dda6714756c1c0f0e6c4dc2b414f1d47e0c1b (image=quay.io/ceph/ceph:v20, name=laughing_herschel, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:22 compute-0 podman[75828]: 2026-01-23 18:34:22.23975888 +0000 UTC m=+0.125314321 container start 366b7df9d2ec4eda87cdf9f5200dda6714756c1c0f0e6c4dc2b414f1d47e0c1b (image=quay.io/ceph/ceph:v20, name=laughing_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'balancer'
Jan 23 18:34:22 compute-0 podman[75828]: 2026-01-23 18:34:22.257070211 +0000 UTC m=+0.142625642 container attach 366b7df9d2ec4eda87cdf9f5200dda6714756c1c0f0e6c4dc2b414f1d47e0c1b (image=quay.io/ceph/ceph:v20, name=laughing_herschel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:22 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'cephadm'
Jan 23 18:34:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 23 18:34:22 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104833538' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 23 18:34:22 compute-0 laughing_herschel[75864]: {
Jan 23 18:34:22 compute-0 laughing_herschel[75864]:     "epoch": 5,
Jan 23 18:34:22 compute-0 laughing_herschel[75864]:     "available": true,
Jan 23 18:34:22 compute-0 laughing_herschel[75864]:     "active_name": "compute-0.rszfwe",
Jan 23 18:34:22 compute-0 laughing_herschel[75864]:     "num_standby": 0
Jan 23 18:34:22 compute-0 laughing_herschel[75864]: }
Jan 23 18:34:22 compute-0 systemd[1]: libpod-366b7df9d2ec4eda87cdf9f5200dda6714756c1c0f0e6c4dc2b414f1d47e0c1b.scope: Deactivated successfully.
Jan 23 18:34:22 compute-0 podman[75828]: 2026-01-23 18:34:22.746773663 +0000 UTC m=+0.632329094 container died 366b7df9d2ec4eda87cdf9f5200dda6714756c1c0f0e6c4dc2b414f1d47e0c1b (image=quay.io/ceph/ceph:v20, name=laughing_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd680e5603888d6bd28de23b509d7b40f77068b10e443958ff44831d4df6abd0-merged.mount: Deactivated successfully.
Jan 23 18:34:23 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'crash'
Jan 23 18:34:23 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'dashboard'
Jan 23 18:34:23 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4258897405' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 23 18:34:23 compute-0 ceph-mon[75097]: mgrmap e5: compute-0.rszfwe(active, since 5s)
Jan 23 18:34:23 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2104833538' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 23 18:34:23 compute-0 podman[75828]: 2026-01-23 18:34:23.246476214 +0000 UTC m=+1.132031665 container remove 366b7df9d2ec4eda87cdf9f5200dda6714756c1c0f0e6c4dc2b414f1d47e0c1b (image=quay.io/ceph/ceph:v20, name=laughing_herschel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 23 18:34:23 compute-0 systemd[1]: libpod-conmon-366b7df9d2ec4eda87cdf9f5200dda6714756c1c0f0e6c4dc2b414f1d47e0c1b.scope: Deactivated successfully.
Jan 23 18:34:23 compute-0 podman[75915]: 2026-01-23 18:34:23.360617965 +0000 UTC m=+0.095127838 container create 63165935015af8b842301fae4f2aae06c3abd6eef9f4bff5d3774119b59a0acf (image=quay.io/ceph/ceph:v20, name=vibrant_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 18:34:23 compute-0 podman[75915]: 2026-01-23 18:34:23.294852708 +0000 UTC m=+0.029362601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:23 compute-0 systemd[1]: Started libpod-conmon-63165935015af8b842301fae4f2aae06c3abd6eef9f4bff5d3774119b59a0acf.scope.
Jan 23 18:34:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fa6b08dcd1f2ae13ff5e8e42b6265103e43dc10b0338ad392b452e7d9373/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fa6b08dcd1f2ae13ff5e8e42b6265103e43dc10b0338ad392b452e7d9373/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fa6b08dcd1f2ae13ff5e8e42b6265103e43dc10b0338ad392b452e7d9373/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:23 compute-0 podman[75915]: 2026-01-23 18:34:23.461163519 +0000 UTC m=+0.195673412 container init 63165935015af8b842301fae4f2aae06c3abd6eef9f4bff5d3774119b59a0acf (image=quay.io/ceph/ceph:v20, name=vibrant_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 18:34:23 compute-0 podman[75915]: 2026-01-23 18:34:23.466276947 +0000 UTC m=+0.200786850 container start 63165935015af8b842301fae4f2aae06c3abd6eef9f4bff5d3774119b59a0acf (image=quay.io/ceph/ceph:v20, name=vibrant_goodall, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:23 compute-0 podman[75915]: 2026-01-23 18:34:23.470888521 +0000 UTC m=+0.205398444 container attach 63165935015af8b842301fae4f2aae06c3abd6eef9f4bff5d3774119b59a0acf (image=quay.io/ceph/ceph:v20, name=vibrant_goodall, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 18:34:23 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'devicehealth'
Jan 23 18:34:24 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 18:34:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 18:34:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 18:34:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]:   from numpy import show_config as show_numpy_config
Jan 23 18:34:24 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'influx'
Jan 23 18:34:24 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'insights'
Jan 23 18:34:24 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'iostat'
Jan 23 18:34:24 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'k8sevents'
Jan 23 18:34:24 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'localpool'
Jan 23 18:34:24 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 18:34:25 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'mirroring'
Jan 23 18:34:25 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'nfs'
Jan 23 18:34:25 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'orchestrator'
Jan 23 18:34:25 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 18:34:25 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'osd_support'
Jan 23 18:34:25 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 18:34:25 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'progress'
Jan 23 18:34:26 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'prometheus'
Jan 23 18:34:26 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'rbd_support'
Jan 23 18:34:26 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'rgw'
Jan 23 18:34:26 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'rook'
Jan 23 18:34:27 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'selftest'
Jan 23 18:34:27 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'smb'
Jan 23 18:34:27 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'snap_schedule'
Jan 23 18:34:27 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'stats'
Jan 23 18:34:27 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'status'
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'telegraf'
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'telemetry'
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: mgr[py] Loading python module 'volumes'
Jan 23 18:34:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : Active manager daemon compute-0.rszfwe restarted
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:34:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rszfwe
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: ms_deliver_dispatch: unhandled message 0x5642ec006000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: mgr handle_mgr_map Activating!
Jan 23 18:34:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: mgr handle_mgr_map I am now activating
Jan 23 18:34:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.rszfwe(active, starting, since 0.0163828s)
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 18:34:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rszfwe", "id": "compute-0.rszfwe"} v 0)
Jan 23 18:34:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mgr metadata", "who": "compute-0.rszfwe", "id": "compute-0.rszfwe"} : dispatch
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 23 18:34:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mds metadata"} : dispatch
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e1 all = 1
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 18:34:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata"} : dispatch
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 23 18:34:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mon metadata"} : dispatch
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: balancer
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Starting
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : Manager daemon compute-0.rszfwe is now available
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:34:28
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:34:28 compute-0 ceph-mgr[75390]: [balancer INFO root] No pools available
Jan 23 18:34:28 compute-0 ceph-mon[75097]: Active manager daemon compute-0.rszfwe restarted
Jan 23 18:34:28 compute-0 ceph-mon[75097]: Activating manager daemon compute-0.rszfwe
Jan 23 18:34:28 compute-0 ceph-mon[75097]: osdmap e2: 0 total, 0 up, 0 in
Jan 23 18:34:28 compute-0 ceph-mon[75097]: mgrmap e6: compute-0.rszfwe(active, starting, since 0.0163828s)
Jan 23 18:34:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 23 18:34:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mgr metadata", "who": "compute-0.rszfwe", "id": "compute-0.rszfwe"} : dispatch
Jan 23 18:34:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mds metadata"} : dispatch
Jan 23 18:34:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata"} : dispatch
Jan 23 18:34:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mon metadata"} : dispatch
Jan 23 18:34:28 compute-0 ceph-mon[75097]: Manager daemon compute-0.rszfwe is now available
Jan 23 18:34:29 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 23 18:34:29 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.rszfwe(active, since 1.0264s)
Jan 23 18:34:29 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 23 18:34:29 compute-0 vibrant_goodall[75932]: {
Jan 23 18:34:29 compute-0 vibrant_goodall[75932]:     "mgrmap_epoch": 7,
Jan 23 18:34:29 compute-0 vibrant_goodall[75932]:     "initialized": true
Jan 23 18:34:29 compute-0 vibrant_goodall[75932]: }
Jan 23 18:34:29 compute-0 systemd[1]: libpod-63165935015af8b842301fae4f2aae06c3abd6eef9f4bff5d3774119b59a0acf.scope: Deactivated successfully.
Jan 23 18:34:29 compute-0 podman[75915]: 2026-01-23 18:34:29.87927736 +0000 UTC m=+6.613787263 container died 63165935015af8b842301fae4f2aae06c3abd6eef9f4bff5d3774119b59a0acf (image=quay.io/ceph/ceph:v20, name=vibrant_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaf7fa6b08dcd1f2ae13ff5e8e42b6265103e43dc10b0338ad392b452e7d9373-merged.mount: Deactivated successfully.
Jan 23 18:34:29 compute-0 podman[75915]: 2026-01-23 18:34:29.922959497 +0000 UTC m=+6.657469360 container remove 63165935015af8b842301fae4f2aae06c3abd6eef9f4bff5d3774119b59a0acf (image=quay.io/ceph/ceph:v20, name=vibrant_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 18:34:29 compute-0 systemd[1]: libpod-conmon-63165935015af8b842301fae4f2aae06c3abd6eef9f4bff5d3774119b59a0acf.scope: Deactivated successfully.
Jan 23 18:34:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 23 18:34:29 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 23 18:34:29 compute-0 podman[76001]: 2026-01-23 18:34:29.985697929 +0000 UTC m=+0.040989121 container create 48b680dc5811a0852d81c98659fcf38e95821bc0fd203ce765c1dd1e2b1ea27c (image=quay.io/ceph/ceph:v20, name=funny_khayyam, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 18:34:29 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:29 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 23 18:34:29 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 23 18:34:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 23 18:34:29 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 23 18:34:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: cephadm
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: crash
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: devicehealth
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [devicehealth INFO root] Starting
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: iostat
Jan 23 18:34:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 18:34:30 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: nfs
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: orchestrator
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: pg_autoscaler
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: progress
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 systemd[1]: Started libpod-conmon-48b680dc5811a0852d81c98659fcf38e95821bc0fd203ce765c1dd1e2b1ea27c.scope.
Jan 23 18:34:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [progress INFO root] Loading...
Jan 23 18:34:30 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [progress INFO root] No stored events to load
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [progress INFO root] Loaded [] historic events
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [progress INFO root] Loaded OSDMap, ready.
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] recovery thread starting
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] starting setup
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: rbd_support
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: status
Jan 23 18:34:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/mirror_snapshot_schedule"} v 0)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/mirror_snapshot_schedule"} : dispatch
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: telemetry
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] PerfHandler: starting
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TaskHandler: starting
Jan 23 18:34:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/trash_purge_schedule"} v 0)
Jan 23 18:34:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/trash_purge_schedule"} : dispatch
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] setup complete
Jan 23 18:34:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee05e3a2325b67e0f343f1d1ce501122cb7b593d66b08ea72a553c57a18f41da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee05e3a2325b67e0f343f1d1ce501122cb7b593d66b08ea72a553c57a18f41da/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee05e3a2325b67e0f343f1d1ce501122cb7b593d66b08ea72a553c57a18f41da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr load Constructed class from module: volumes
Jan 23 18:34:30 compute-0 podman[76001]: 2026-01-23 18:34:30.057406704 +0000 UTC m=+0.112697906 container init 48b680dc5811a0852d81c98659fcf38e95821bc0fd203ce765c1dd1e2b1ea27c (image=quay.io/ceph/ceph:v20, name=funny_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 23 18:34:30 compute-0 podman[76001]: 2026-01-23 18:34:29.96643806 +0000 UTC m=+0.021729292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:30 compute-0 podman[76001]: 2026-01-23 18:34:30.063339863 +0000 UTC m=+0.118631055 container start 48b680dc5811a0852d81c98659fcf38e95821bc0fd203ce765c1dd1e2b1ea27c (image=quay.io/ceph/ceph:v20, name=funny_khayyam, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:30 compute-0 podman[76001]: 2026-01-23 18:34:30.066999973 +0000 UTC m=+0.122291165 container attach 48b680dc5811a0852d81c98659fcf38e95821bc0fd203ce765c1dd1e2b1ea27c (image=quay.io/ceph/ceph:v20, name=funny_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 23 18:34:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/234916869' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 23 18:34:30 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:30 compute-0 ceph-mon[75097]: mgrmap e7: compute-0.rszfwe(active, since 1.0264s)
Jan 23 18:34:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:30 compute-0 ceph-mon[75097]: Found migration_current of "None". Setting to last migration.
Jan 23 18:34:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:34:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:34:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/mirror_snapshot_schedule"} : dispatch
Jan 23 18:34:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rszfwe/trash_purge_schedule"} : dispatch
Jan 23 18:34:30 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/234916869' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 23 18:34:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/234916869' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 23 18:34:31 compute-0 funny_khayyam[76046]: module 'orchestrator' is already enabled (always-on)
Jan 23 18:34:31 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.rszfwe(active, since 2s)
Jan 23 18:34:31 compute-0 systemd[1]: libpod-48b680dc5811a0852d81c98659fcf38e95821bc0fd203ce765c1dd1e2b1ea27c.scope: Deactivated successfully.
Jan 23 18:34:31 compute-0 podman[76001]: 2026-01-23 18:34:31.036691106 +0000 UTC m=+1.091982288 container died 48b680dc5811a0852d81c98659fcf38e95821bc0fd203ce765c1dd1e2b1ea27c (image=quay.io/ceph/ceph:v20, name=funny_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 18:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee05e3a2325b67e0f343f1d1ce501122cb7b593d66b08ea72a553c57a18f41da-merged.mount: Deactivated successfully.
Jan 23 18:34:31 compute-0 podman[76001]: 2026-01-23 18:34:31.08665827 +0000 UTC m=+1.141949452 container remove 48b680dc5811a0852d81c98659fcf38e95821bc0fd203ce765c1dd1e2b1ea27c (image=quay.io/ceph/ceph:v20, name=funny_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:31 compute-0 systemd[1]: libpod-conmon-48b680dc5811a0852d81c98659fcf38e95821bc0fd203ce765c1dd1e2b1ea27c.scope: Deactivated successfully.
Jan 23 18:34:31 compute-0 podman[76133]: 2026-01-23 18:34:31.166480628 +0000 UTC m=+0.050731905 container create 11a51105b001f6405a5bd89e3fe225dddc3c952e968651ecd1b5d243ae2d8b8e (image=quay.io/ceph/ceph:v20, name=sharp_torvalds, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:31 compute-0 systemd[1]: Started libpod-conmon-11a51105b001f6405a5bd89e3fe225dddc3c952e968651ecd1b5d243ae2d8b8e.scope.
Jan 23 18:34:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc154ba51ddf1e2899a600c126cb216f89eb10b3840cbac0d41370dedbf062f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc154ba51ddf1e2899a600c126cb216f89eb10b3840cbac0d41370dedbf062f4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc154ba51ddf1e2899a600c126cb216f89eb10b3840cbac0d41370dedbf062f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:31 compute-0 podman[76133]: 2026-01-23 18:34:31.14408783 +0000 UTC m=+0.028339117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:31 compute-0 podman[76133]: 2026-01-23 18:34:31.243911045 +0000 UTC m=+0.128162322 container init 11a51105b001f6405a5bd89e3fe225dddc3c952e968651ecd1b5d243ae2d8b8e (image=quay.io/ceph/ceph:v20, name=sharp_torvalds, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:31 compute-0 podman[76133]: 2026-01-23 18:34:31.249500464 +0000 UTC m=+0.133751731 container start 11a51105b001f6405a5bd89e3fe225dddc3c952e968651ecd1b5d243ae2d8b8e (image=quay.io/ceph/ceph:v20, name=sharp_torvalds, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:31 compute-0 podman[76133]: 2026-01-23 18:34:31.252611152 +0000 UTC m=+0.136862419 container attach 11a51105b001f6405a5bd89e3fe225dddc3c952e968651ecd1b5d243ae2d8b8e (image=quay.io/ceph/ceph:v20, name=sharp_torvalds, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: [cephadm INFO cherrypy.error] [23/Jan/2026:18:34:31] ENGINE Bus STARTING
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : [23/Jan/2026:18:34:31] ENGINE Bus STARTING
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 23 18:34:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 18:34:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:34:31 compute-0 systemd[1]: libpod-11a51105b001f6405a5bd89e3fe225dddc3c952e968651ecd1b5d243ae2d8b8e.scope: Deactivated successfully.
Jan 23 18:34:31 compute-0 podman[76133]: 2026-01-23 18:34:31.706863591 +0000 UTC m=+0.591114868 container died 11a51105b001f6405a5bd89e3fe225dddc3c952e968651ecd1b5d243ae2d8b8e (image=quay.io/ceph/ceph:v20, name=sharp_torvalds, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc154ba51ddf1e2899a600c126cb216f89eb10b3840cbac0d41370dedbf062f4-merged.mount: Deactivated successfully.
Jan 23 18:34:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019906223 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:34:31 compute-0 podman[76133]: 2026-01-23 18:34:31.747100493 +0000 UTC m=+0.631351770 container remove 11a51105b001f6405a5bd89e3fe225dddc3c952e968651ecd1b5d243ae2d8b8e (image=quay.io/ceph/ceph:v20, name=sharp_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: [cephadm INFO cherrypy.error] [23/Jan/2026:18:34:31] ENGINE Serving on http://192.168.122.100:8765
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : [23/Jan/2026:18:34:31] ENGINE Serving on http://192.168.122.100:8765
Jan 23 18:34:31 compute-0 systemd[1]: libpod-conmon-11a51105b001f6405a5bd89e3fe225dddc3c952e968651ecd1b5d243ae2d8b8e.scope: Deactivated successfully.
Jan 23 18:34:31 compute-0 podman[76202]: 2026-01-23 18:34:31.8044269 +0000 UTC m=+0.039416392 container create 55a88b3a64cef5373c35548c99b9c3c7734f87dcbad0a0e9c2a4233e4acd8ec9 (image=quay.io/ceph/ceph:v20, name=pensive_curran, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 18:34:31 compute-0 systemd[1]: Started libpod-conmon-55a88b3a64cef5373c35548c99b9c3c7734f87dcbad0a0e9c2a4233e4acd8ec9.scope.
Jan 23 18:34:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3c5498c75b435ebe95d62e7be209603ee503e3431f37fd9955cbc8db1b0c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3c5498c75b435ebe95d62e7be209603ee503e3431f37fd9955cbc8db1b0c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3c5498c75b435ebe95d62e7be209603ee503e3431f37fd9955cbc8db1b0c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:31 compute-0 podman[76202]: 2026-01-23 18:34:31.869069709 +0000 UTC m=+0.104059251 container init 55a88b3a64cef5373c35548c99b9c3c7734f87dcbad0a0e9c2a4233e4acd8ec9 (image=quay.io/ceph/ceph:v20, name=pensive_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:31 compute-0 podman[76202]: 2026-01-23 18:34:31.874164677 +0000 UTC m=+0.109154189 container start 55a88b3a64cef5373c35548c99b9c3c7734f87dcbad0a0e9c2a4233e4acd8ec9 (image=quay.io/ceph/ceph:v20, name=pensive_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: [cephadm INFO cherrypy.error] [23/Jan/2026:18:34:31] ENGINE Serving on https://192.168.122.100:7150
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : [23/Jan/2026:18:34:31] ENGINE Serving on https://192.168.122.100:7150
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: [cephadm INFO cherrypy.error] [23/Jan/2026:18:34:31] ENGINE Bus STARTED
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : [23/Jan/2026:18:34:31] ENGINE Bus STARTED
Jan 23 18:34:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 18:34:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: [cephadm INFO cherrypy.error] [23/Jan/2026:18:34:31] ENGINE Client ('192.168.122.100', 49676) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 18:34:31 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : [23/Jan/2026:18:34:31] ENGINE Client ('192.168.122.100', 49676) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 18:34:31 compute-0 podman[76202]: 2026-01-23 18:34:31.878777191 +0000 UTC m=+0.113766703 container attach 55a88b3a64cef5373c35548c99b9c3c7734f87dcbad0a0e9c2a4233e4acd8ec9 (image=quay.io/ceph/ceph:v20, name=pensive_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 23 18:34:31 compute-0 podman[76202]: 2026-01-23 18:34:31.785875048 +0000 UTC m=+0.020864560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:32 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:32 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/234916869' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 23 18:34:32 compute-0 ceph-mon[75097]: mgrmap e8: compute-0.rszfwe(active, since 2s)
Jan 23 18:34:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:34:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:34:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 23 18:34:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:32 compute-0 ceph-mgr[75390]: [cephadm INFO root] Set ssh ssh_user
Jan 23 18:34:32 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 23 18:34:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 23 18:34:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:32 compute-0 ceph-mgr[75390]: [cephadm INFO root] Set ssh ssh_config
Jan 23 18:34:32 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 23 18:34:32 compute-0 ceph-mgr[75390]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 23 18:34:32 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 23 18:34:32 compute-0 pensive_curran[76231]: ssh user set to ceph-admin. sudo will be used
Jan 23 18:34:32 compute-0 systemd[1]: libpod-55a88b3a64cef5373c35548c99b9c3c7734f87dcbad0a0e9c2a4233e4acd8ec9.scope: Deactivated successfully.
Jan 23 18:34:32 compute-0 podman[76202]: 2026-01-23 18:34:32.337018891 +0000 UTC m=+0.572008383 container died 55a88b3a64cef5373c35548c99b9c3c7734f87dcbad0a0e9c2a4233e4acd8ec9 (image=quay.io/ceph/ceph:v20, name=pensive_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 18:34:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-68d3c5498c75b435ebe95d62e7be209603ee503e3431f37fd9955cbc8db1b0c2-merged.mount: Deactivated successfully.
Jan 23 18:34:32 compute-0 podman[76202]: 2026-01-23 18:34:32.375132499 +0000 UTC m=+0.610121991 container remove 55a88b3a64cef5373c35548c99b9c3c7734f87dcbad0a0e9c2a4233e4acd8ec9 (image=quay.io/ceph/ceph:v20, name=pensive_curran, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:32 compute-0 systemd[1]: libpod-conmon-55a88b3a64cef5373c35548c99b9c3c7734f87dcbad0a0e9c2a4233e4acd8ec9.scope: Deactivated successfully.
Jan 23 18:34:32 compute-0 podman[76269]: 2026-01-23 18:34:32.432805185 +0000 UTC m=+0.039772981 container create 217b51917219679c9abc39a4220dd83c97d32401869038a46f2512400481a9ff (image=quay.io/ceph/ceph:v20, name=elastic_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:32 compute-0 systemd[1]: Started libpod-conmon-217b51917219679c9abc39a4220dd83c97d32401869038a46f2512400481a9ff.scope.
Jan 23 18:34:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b1c122a01578ca5e977b6c165097108cd46710e94d35d9308fe3bd51975fef/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b1c122a01578ca5e977b6c165097108cd46710e94d35d9308fe3bd51975fef/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b1c122a01578ca5e977b6c165097108cd46710e94d35d9308fe3bd51975fef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b1c122a01578ca5e977b6c165097108cd46710e94d35d9308fe3bd51975fef/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b1c122a01578ca5e977b6c165097108cd46710e94d35d9308fe3bd51975fef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:32 compute-0 podman[76269]: 2026-01-23 18:34:32.417153525 +0000 UTC m=+0.024121351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:32 compute-0 podman[76269]: 2026-01-23 18:34:32.671939128 +0000 UTC m=+0.278906934 container init 217b51917219679c9abc39a4220dd83c97d32401869038a46f2512400481a9ff (image=quay.io/ceph/ceph:v20, name=elastic_black, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:32 compute-0 podman[76269]: 2026-01-23 18:34:32.67764033 +0000 UTC m=+0.284608136 container start 217b51917219679c9abc39a4220dd83c97d32401869038a46f2512400481a9ff (image=quay.io/ceph/ceph:v20, name=elastic_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 18:34:32 compute-0 podman[76269]: 2026-01-23 18:34:32.700851717 +0000 UTC m=+0.307819543 container attach 217b51917219679c9abc39a4220dd83c97d32401869038a46f2512400481a9ff (image=quay.io/ceph/ceph:v20, name=elastic_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 18:34:32 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:33 compute-0 ceph-mon[75097]: [23/Jan/2026:18:34:31] ENGINE Bus STARTING
Jan 23 18:34:33 compute-0 ceph-mon[75097]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:33 compute-0 ceph-mon[75097]: [23/Jan/2026:18:34:31] ENGINE Serving on http://192.168.122.100:8765
Jan 23 18:34:33 compute-0 ceph-mon[75097]: [23/Jan/2026:18:34:31] ENGINE Serving on https://192.168.122.100:7150
Jan 23 18:34:33 compute-0 ceph-mon[75097]: [23/Jan/2026:18:34:31] ENGINE Bus STARTED
Jan 23 18:34:33 compute-0 ceph-mon[75097]: [23/Jan/2026:18:34:31] ENGINE Client ('192.168.122.100', 49676) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 18:34:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 23 18:34:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:33 compute-0 ceph-mgr[75390]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 23 18:34:33 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 23 18:34:33 compute-0 ceph-mgr[75390]: [cephadm INFO root] Set ssh private key
Jan 23 18:34:33 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 23 18:34:33 compute-0 systemd[1]: libpod-217b51917219679c9abc39a4220dd83c97d32401869038a46f2512400481a9ff.scope: Deactivated successfully.
Jan 23 18:34:33 compute-0 podman[76269]: 2026-01-23 18:34:33.120460365 +0000 UTC m=+0.727428161 container died 217b51917219679c9abc39a4220dd83c97d32401869038a46f2512400481a9ff (image=quay.io/ceph/ceph:v20, name=elastic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 23 18:34:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-61b1c122a01578ca5e977b6c165097108cd46710e94d35d9308fe3bd51975fef-merged.mount: Deactivated successfully.
Jan 23 18:34:33 compute-0 podman[76269]: 2026-01-23 18:34:33.167851195 +0000 UTC m=+0.774818991 container remove 217b51917219679c9abc39a4220dd83c97d32401869038a46f2512400481a9ff (image=quay.io/ceph/ceph:v20, name=elastic_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 18:34:33 compute-0 systemd[1]: libpod-conmon-217b51917219679c9abc39a4220dd83c97d32401869038a46f2512400481a9ff.scope: Deactivated successfully.
Jan 23 18:34:33 compute-0 podman[76325]: 2026-01-23 18:34:33.223342297 +0000 UTC m=+0.036454058 container create 7786dbe1cf632859be0c9ce0482289c0227d3517115a1612af4a4c219aabf238 (image=quay.io/ceph/ceph:v20, name=cranky_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 18:34:33 compute-0 systemd[1]: Started libpod-conmon-7786dbe1cf632859be0c9ce0482289c0227d3517115a1612af4a4c219aabf238.scope.
Jan 23 18:34:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabdc47e95d2b2023cef8397b0723b7bea8c70361f795b21cae2198fc775f6e9/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabdc47e95d2b2023cef8397b0723b7bea8c70361f795b21cae2198fc775f6e9/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabdc47e95d2b2023cef8397b0723b7bea8c70361f795b21cae2198fc775f6e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabdc47e95d2b2023cef8397b0723b7bea8c70361f795b21cae2198fc775f6e9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabdc47e95d2b2023cef8397b0723b7bea8c70361f795b21cae2198fc775f6e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:33 compute-0 podman[76325]: 2026-01-23 18:34:33.278966211 +0000 UTC m=+0.092077992 container init 7786dbe1cf632859be0c9ce0482289c0227d3517115a1612af4a4c219aabf238 (image=quay.io/ceph/ceph:v20, name=cranky_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 18:34:33 compute-0 podman[76325]: 2026-01-23 18:34:33.287145315 +0000 UTC m=+0.100257076 container start 7786dbe1cf632859be0c9ce0482289c0227d3517115a1612af4a4c219aabf238 (image=quay.io/ceph/ceph:v20, name=cranky_benz, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 23 18:34:33 compute-0 podman[76325]: 2026-01-23 18:34:33.290469299 +0000 UTC m=+0.103581110 container attach 7786dbe1cf632859be0c9ce0482289c0227d3517115a1612af4a4c219aabf238 (image=quay.io/ceph/ceph:v20, name=cranky_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:33 compute-0 podman[76325]: 2026-01-23 18:34:33.206778595 +0000 UTC m=+0.019890376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 23 18:34:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:33 compute-0 ceph-mgr[75390]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 23 18:34:33 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 23 18:34:33 compute-0 systemd[1]: libpod-7786dbe1cf632859be0c9ce0482289c0227d3517115a1612af4a4c219aabf238.scope: Deactivated successfully.
Jan 23 18:34:33 compute-0 podman[76325]: 2026-01-23 18:34:33.733393046 +0000 UTC m=+0.546504817 container died 7786dbe1cf632859be0c9ce0482289c0227d3517115a1612af4a4c219aabf238 (image=quay.io/ceph/ceph:v20, name=cranky_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-aabdc47e95d2b2023cef8397b0723b7bea8c70361f795b21cae2198fc775f6e9-merged.mount: Deactivated successfully.
Jan 23 18:34:33 compute-0 podman[76325]: 2026-01-23 18:34:33.768339596 +0000 UTC m=+0.581451357 container remove 7786dbe1cf632859be0c9ce0482289c0227d3517115a1612af4a4c219aabf238 (image=quay.io/ceph/ceph:v20, name=cranky_benz, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:33 compute-0 systemd[1]: libpod-conmon-7786dbe1cf632859be0c9ce0482289c0227d3517115a1612af4a4c219aabf238.scope: Deactivated successfully.
Jan 23 18:34:33 compute-0 podman[76379]: 2026-01-23 18:34:33.822769721 +0000 UTC m=+0.036944441 container create d6d4a01ee1bf3fbbddbefb9454191a88f6f792fe848db933422b77360ddfca55 (image=quay.io/ceph/ceph:v20, name=amazing_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 18:34:33 compute-0 systemd[1]: Started libpod-conmon-d6d4a01ee1bf3fbbddbefb9454191a88f6f792fe848db933422b77360ddfca55.scope.
Jan 23 18:34:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96c0c15aea917b18213a88688b5e5759cde93f7203d3b49b3800635ab5c8e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96c0c15aea917b18213a88688b5e5759cde93f7203d3b49b3800635ab5c8e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96c0c15aea917b18213a88688b5e5759cde93f7203d3b49b3800635ab5c8e1c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:33 compute-0 podman[76379]: 2026-01-23 18:34:33.876054248 +0000 UTC m=+0.090228968 container init d6d4a01ee1bf3fbbddbefb9454191a88f6f792fe848db933422b77360ddfca55 (image=quay.io/ceph/ceph:v20, name=amazing_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 18:34:33 compute-0 podman[76379]: 2026-01-23 18:34:33.881167504 +0000 UTC m=+0.095342234 container start d6d4a01ee1bf3fbbddbefb9454191a88f6f792fe848db933422b77360ddfca55 (image=quay.io/ceph/ceph:v20, name=amazing_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 18:34:33 compute-0 podman[76379]: 2026-01-23 18:34:33.884922408 +0000 UTC m=+0.099097148 container attach d6d4a01ee1bf3fbbddbefb9454191a88f6f792fe848db933422b77360ddfca55 (image=quay.io/ceph/ceph:v20, name=amazing_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:33 compute-0 podman[76379]: 2026-01-23 18:34:33.807432549 +0000 UTC m=+0.021607289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:34 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:34 compute-0 ceph-mon[75097]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:34 compute-0 ceph-mon[75097]: Set ssh ssh_user
Jan 23 18:34:34 compute-0 ceph-mon[75097]: Set ssh ssh_config
Jan 23 18:34:34 compute-0 ceph-mon[75097]: ssh user set to ceph-admin. sudo will be used
Jan 23 18:34:34 compute-0 ceph-mon[75097]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:34 compute-0 ceph-mon[75097]: Set ssh ssh_identity_key
Jan 23 18:34:34 compute-0 ceph-mon[75097]: Set ssh private key
Jan 23 18:34:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:34 compute-0 amazing_hopper[76395]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZVaDN2UV89gWjzc6G6nOSxgG/rfqXKP/lPiHdwyc6vfe/1GVieryMwlGXJbaPdRkrz8kjooOnFqvlimQNdRG0gBro1dclqZutxGYeQYA/0EWZyVQ6Cxy0Bi202AcVa14qwgGgpP2irI4DyOEfJuK+1E8+eU5r/9ol2S3viZWVbyMRAAWmQ4gA8iA0TSjTjjt02VDwrgUbHrMQQOE8I8tMZ3pQUM7RcwMuoI4Tx0topofQsJC52KoI/vEDM/0Wjzv7g6LUWCiUl1QFOBtRK4deFtm6VdX9rXMgReiqPSWr+Po2bJtf1P5YNAn55AeYct/rAEnTlvLBvXB8d3fV7Z8Iyun9WBOSffpezvSCVKdSEdxWXWR8x5hojvcs1tjCaAI8fdQVnxc9Y72DdRiBT7anJe+nB3xezaxDjZszAJDemz6tSEvrCvbt/y3n0eyeSpNoCzQXqBUgW7QjXHG5bqxYZFSqL0goF9RTGvcP6f4E+m/6e26LiJUVex9aK7Kvczs= zuul@controller
Jan 23 18:34:34 compute-0 systemd[1]: libpod-d6d4a01ee1bf3fbbddbefb9454191a88f6f792fe848db933422b77360ddfca55.scope: Deactivated successfully.
Jan 23 18:34:34 compute-0 podman[76379]: 2026-01-23 18:34:34.310250368 +0000 UTC m=+0.524425088 container died d6d4a01ee1bf3fbbddbefb9454191a88f6f792fe848db933422b77360ddfca55 (image=quay.io/ceph/ceph:v20, name=amazing_hopper, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e96c0c15aea917b18213a88688b5e5759cde93f7203d3b49b3800635ab5c8e1c-merged.mount: Deactivated successfully.
Jan 23 18:34:34 compute-0 podman[76379]: 2026-01-23 18:34:34.350399677 +0000 UTC m=+0.564574397 container remove d6d4a01ee1bf3fbbddbefb9454191a88f6f792fe848db933422b77360ddfca55 (image=quay.io/ceph/ceph:v20, name=amazing_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 18:34:34 compute-0 systemd[1]: libpod-conmon-d6d4a01ee1bf3fbbddbefb9454191a88f6f792fe848db933422b77360ddfca55.scope: Deactivated successfully.
Jan 23 18:34:34 compute-0 podman[76433]: 2026-01-23 18:34:34.414055322 +0000 UTC m=+0.042075299 container create 2fe220548c5b073161f4804fc6ede7bf1f628734d906d7ce530c269a9e84af64 (image=quay.io/ceph/ceph:v20, name=focused_antonelli, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 18:34:34 compute-0 systemd[1]: Started libpod-conmon-2fe220548c5b073161f4804fc6ede7bf1f628734d906d7ce530c269a9e84af64.scope.
Jan 23 18:34:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8833faf19f5285761078a52aa38ac9f0d68e1be5d3752aa3aca32ed284e981/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8833faf19f5285761078a52aa38ac9f0d68e1be5d3752aa3aca32ed284e981/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8833faf19f5285761078a52aa38ac9f0d68e1be5d3752aa3aca32ed284e981/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:34 compute-0 podman[76433]: 2026-01-23 18:34:34.485949641 +0000 UTC m=+0.113969598 container init 2fe220548c5b073161f4804fc6ede7bf1f628734d906d7ce530c269a9e84af64 (image=quay.io/ceph/ceph:v20, name=focused_antonelli, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 18:34:34 compute-0 podman[76433]: 2026-01-23 18:34:34.490781912 +0000 UTC m=+0.118801869 container start 2fe220548c5b073161f4804fc6ede7bf1f628734d906d7ce530c269a9e84af64 (image=quay.io/ceph/ceph:v20, name=focused_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 18:34:34 compute-0 podman[76433]: 2026-01-23 18:34:34.396867084 +0000 UTC m=+0.024887061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:34 compute-0 podman[76433]: 2026-01-23 18:34:34.496148426 +0000 UTC m=+0.124168383 container attach 2fe220548c5b073161f4804fc6ede7bf1f628734d906d7ce530c269a9e84af64 (image=quay.io/ceph/ceph:v20, name=focused_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:34 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:35 compute-0 sshd-session[76476]: Accepted publickey for ceph-admin from 192.168.122.100 port 42658 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:35 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 23 18:34:35 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 23 18:34:35 compute-0 systemd-logind[792]: New session 20 of user ceph-admin.
Jan 23 18:34:35 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 23 18:34:35 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 23 18:34:35 compute-0 systemd[76480]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:35 compute-0 ceph-mon[75097]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:35 compute-0 ceph-mon[75097]: Set ssh ssh_identity_pub
Jan 23 18:34:35 compute-0 ceph-mon[75097]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:35 compute-0 ceph-mon[75097]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:35 compute-0 sshd-session[76493]: Accepted publickey for ceph-admin from 192.168.122.100 port 42666 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:35 compute-0 systemd[76480]: Queued start job for default target Main User Target.
Jan 23 18:34:35 compute-0 systemd-logind[792]: New session 22 of user ceph-admin.
Jan 23 18:34:35 compute-0 systemd[76480]: Created slice User Application Slice.
Jan 23 18:34:35 compute-0 systemd[76480]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 18:34:35 compute-0 systemd[76480]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 18:34:35 compute-0 systemd[76480]: Reached target Paths.
Jan 23 18:34:35 compute-0 systemd[76480]: Reached target Timers.
Jan 23 18:34:35 compute-0 systemd[76480]: Starting D-Bus User Message Bus Socket...
Jan 23 18:34:35 compute-0 systemd[76480]: Starting Create User's Volatile Files and Directories...
Jan 23 18:34:35 compute-0 systemd[76480]: Listening on D-Bus User Message Bus Socket.
Jan 23 18:34:35 compute-0 systemd[76480]: Finished Create User's Volatile Files and Directories.
Jan 23 18:34:35 compute-0 systemd[76480]: Reached target Sockets.
Jan 23 18:34:35 compute-0 systemd[76480]: Reached target Basic System.
Jan 23 18:34:35 compute-0 systemd[76480]: Reached target Main User Target.
Jan 23 18:34:35 compute-0 systemd[76480]: Startup finished in 150ms.
Jan 23 18:34:35 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 23 18:34:35 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Jan 23 18:34:35 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Jan 23 18:34:35 compute-0 sshd-session[76476]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:35 compute-0 sshd-session[76493]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:35 compute-0 sudo[76501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:35 compute-0 sudo[76501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:35 compute-0 sudo[76501]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:35 compute-0 sshd-session[76526]: Accepted publickey for ceph-admin from 192.168.122.100 port 42674 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:35 compute-0 systemd-logind[792]: New session 23 of user ceph-admin.
Jan 23 18:34:35 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 23 18:34:35 compute-0 sshd-session[76526]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:35 compute-0 sudo[76530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 23 18:34:35 compute-0 sudo[76530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:35 compute-0 sudo[76530]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:35 compute-0 sshd-session[76555]: Accepted publickey for ceph-admin from 192.168.122.100 port 42676 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:35 compute-0 systemd-logind[792]: New session 24 of user ceph-admin.
Jan 23 18:34:35 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 23 18:34:35 compute-0 sshd-session[76555]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:36 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:36 compute-0 sudo[76559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 23 18:34:36 compute-0 sudo[76559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:36 compute-0 sudo[76559]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:36 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 23 18:34:36 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 23 18:34:36 compute-0 ceph-mon[75097]: Deploying cephadm binary to compute-0
Jan 23 18:34:36 compute-0 sshd-session[76584]: Accepted publickey for ceph-admin from 192.168.122.100 port 42692 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:36 compute-0 systemd-logind[792]: New session 25 of user ceph-admin.
Jan 23 18:34:36 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 23 18:34:36 compute-0 sshd-session[76584]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:36 compute-0 sudo[76588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:36 compute-0 sudo[76588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:36 compute-0 sudo[76588]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:36 compute-0 sshd-session[76613]: Accepted publickey for ceph-admin from 192.168.122.100 port 42696 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:36 compute-0 systemd-logind[792]: New session 26 of user ceph-admin.
Jan 23 18:34:36 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 23 18:34:36 compute-0 sshd-session[76613]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:36 compute-0 sudo[76617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:36 compute-0 sudo[76617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:36 compute-0 sudo[76617]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052735 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:34:36 compute-0 sshd-session[76642]: Accepted publickey for ceph-admin from 192.168.122.100 port 42712 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:36 compute-0 systemd-logind[792]: New session 27 of user ceph-admin.
Jan 23 18:34:36 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 23 18:34:36 compute-0 sshd-session[76642]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:36 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:36 compute-0 sudo[76646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 23 18:34:36 compute-0 sudo[76646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:36 compute-0 sudo[76646]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:37 compute-0 sshd-session[76671]: Accepted publickey for ceph-admin from 192.168.122.100 port 42716 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:37 compute-0 systemd-logind[792]: New session 28 of user ceph-admin.
Jan 23 18:34:37 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 23 18:34:37 compute-0 sshd-session[76671]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:37 compute-0 sudo[76675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:37 compute-0 sudo[76675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:37 compute-0 sudo[76675]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:37 compute-0 sshd-session[76700]: Accepted publickey for ceph-admin from 192.168.122.100 port 42720 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:37 compute-0 systemd-logind[792]: New session 29 of user ceph-admin.
Jan 23 18:34:37 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 23 18:34:37 compute-0 sshd-session[76700]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:37 compute-0 sudo[76704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 23 18:34:37 compute-0 sudo[76704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:37 compute-0 sudo[76704]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:37 compute-0 sshd-session[76729]: Accepted publickey for ceph-admin from 192.168.122.100 port 49040 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:37 compute-0 systemd-logind[792]: New session 30 of user ceph-admin.
Jan 23 18:34:37 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 23 18:34:37 compute-0 sshd-session[76729]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:38 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:38 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:39 compute-0 sshd-session[76756]: Accepted publickey for ceph-admin from 192.168.122.100 port 49050 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:39 compute-0 systemd-logind[792]: New session 31 of user ceph-admin.
Jan 23 18:34:39 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 23 18:34:39 compute-0 sshd-session[76756]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:39 compute-0 sudo[76760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 23 18:34:39 compute-0 sudo[76760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:39 compute-0 sudo[76760]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:39 compute-0 sshd-session[76785]: Accepted publickey for ceph-admin from 192.168.122.100 port 49054 ssh2: RSA SHA256:Y9YAWoV/FtycHMpA7ujLfTFoetVzWSRVzNMfrWybZ/s
Jan 23 18:34:39 compute-0 systemd-logind[792]: New session 32 of user ceph-admin.
Jan 23 18:34:39 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 23 18:34:39 compute-0 sshd-session[76785]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 18:34:39 compute-0 sudo[76789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 23 18:34:39 compute-0 sudo[76789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:39 compute-0 sudo[76789]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 18:34:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:39 compute-0 ceph-mgr[75390]: [cephadm INFO root] Added host compute-0
Jan 23 18:34:39 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 23 18:34:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 18:34:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:34:39 compute-0 focused_antonelli[76450]: Added host 'compute-0' with addr '192.168.122.100'
Jan 23 18:34:39 compute-0 systemd[1]: libpod-2fe220548c5b073161f4804fc6ede7bf1f628734d906d7ce530c269a9e84af64.scope: Deactivated successfully.
Jan 23 18:34:39 compute-0 podman[76433]: 2026-01-23 18:34:39.913757007 +0000 UTC m=+5.541776984 container died 2fe220548c5b073161f4804fc6ede7bf1f628734d906d7ce530c269a9e84af64 (image=quay.io/ceph/ceph:v20, name=focused_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:39 compute-0 sudo[76832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:39 compute-0 sudo[76832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:39 compute-0 sudo[76832]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef8833faf19f5285761078a52aa38ac9f0d68e1be5d3752aa3aca32ed284e981-merged.mount: Deactivated successfully.
Jan 23 18:34:39 compute-0 podman[76433]: 2026-01-23 18:34:39.971505255 +0000 UTC m=+5.599525212 container remove 2fe220548c5b073161f4804fc6ede7bf1f628734d906d7ce530c269a9e84af64 (image=quay.io/ceph/ceph:v20, name=focused_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:39 compute-0 systemd[1]: libpod-conmon-2fe220548c5b073161f4804fc6ede7bf1f628734d906d7ce530c269a9e84af64.scope: Deactivated successfully.
Jan 23 18:34:40 compute-0 sudo[76868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Jan 23 18:34:40 compute-0 sudo[76868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:40 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:40 compute-0 podman[76892]: 2026-01-23 18:34:40.035456587 +0000 UTC m=+0.043461403 container create d6c62749a3fad20158e852d8ee35d52b615275a3195c23efc70494fff5803df4 (image=quay.io/ceph/ceph:v20, name=epic_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:34:40 compute-0 systemd[1]: Started libpod-conmon-d6c62749a3fad20158e852d8ee35d52b615275a3195c23efc70494fff5803df4.scope.
Jan 23 18:34:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6099fd46e3125f142bd821710c3c9c59c3aa86c81d1761f57b7e475fd5ae7ce3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6099fd46e3125f142bd821710c3c9c59c3aa86c81d1761f57b7e475fd5ae7ce3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6099fd46e3125f142bd821710c3c9c59c3aa86c81d1761f57b7e475fd5ae7ce3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:40 compute-0 podman[76892]: 2026-01-23 18:34:40.017727085 +0000 UTC m=+0.025731921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:40 compute-0 podman[76892]: 2026-01-23 18:34:40.126882943 +0000 UTC m=+0.134887779 container init d6c62749a3fad20158e852d8ee35d52b615275a3195c23efc70494fff5803df4 (image=quay.io/ceph/ceph:v20, name=epic_elgamal, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:40 compute-0 podman[76892]: 2026-01-23 18:34:40.133960409 +0000 UTC m=+0.141965215 container start d6c62749a3fad20158e852d8ee35d52b615275a3195c23efc70494fff5803df4 (image=quay.io/ceph/ceph:v20, name=epic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:40 compute-0 podman[76892]: 2026-01-23 18:34:40.149163378 +0000 UTC m=+0.157168224 container attach d6c62749a3fad20158e852d8ee35d52b615275a3195c23efc70494fff5803df4 (image=quay.io/ceph/ceph:v20, name=epic_elgamal, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 18:34:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:40 compute-0 ceph-mgr[75390]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 23 18:34:40 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 23 18:34:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 23 18:34:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:40 compute-0 epic_elgamal[76913]: Scheduled mon update...
Jan 23 18:34:40 compute-0 systemd[1]: libpod-d6c62749a3fad20158e852d8ee35d52b615275a3195c23efc70494fff5803df4.scope: Deactivated successfully.
Jan 23 18:34:40 compute-0 podman[76892]: 2026-01-23 18:34:40.57139036 +0000 UTC m=+0.579395206 container died d6c62749a3fad20158e852d8ee35d52b615275a3195c23efc70494fff5803df4 (image=quay.io/ceph/ceph:v20, name=epic_elgamal, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 18:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6099fd46e3125f142bd821710c3c9c59c3aa86c81d1761f57b7e475fd5ae7ce3-merged.mount: Deactivated successfully.
Jan 23 18:34:40 compute-0 podman[76892]: 2026-01-23 18:34:40.68185241 +0000 UTC m=+0.689857256 container remove d6c62749a3fad20158e852d8ee35d52b615275a3195c23efc70494fff5803df4 (image=quay.io/ceph/ceph:v20, name=epic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:40 compute-0 systemd[1]: libpod-conmon-d6c62749a3fad20158e852d8ee35d52b615275a3195c23efc70494fff5803df4.scope: Deactivated successfully.
Jan 23 18:34:40 compute-0 podman[76977]: 2026-01-23 18:34:40.777707727 +0000 UTC m=+0.069526643 container create acda0a601d0de01455f5025c29703e44bac0a678af9d0cf69314315f2c6d5af4 (image=quay.io/ceph/ceph:v20, name=stupefied_mcclintock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:34:40 compute-0 podman[76929]: 2026-01-23 18:34:40.794074444 +0000 UTC m=+0.560400113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:40 compute-0 systemd[1]: Started libpod-conmon-acda0a601d0de01455f5025c29703e44bac0a678af9d0cf69314315f2c6d5af4.scope.
Jan 23 18:34:40 compute-0 podman[76977]: 2026-01-23 18:34:40.735509786 +0000 UTC m=+0.027328722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:40 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d213b741d04d660f622cedeefcfa828b93d629919297aae381ab5aa8dc5ec4cc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d213b741d04d660f622cedeefcfa828b93d629919297aae381ab5aa8dc5ec4cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d213b741d04d660f622cedeefcfa828b93d629919297aae381ab5aa8dc5ec4cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:40 compute-0 podman[76977]: 2026-01-23 18:34:40.868251111 +0000 UTC m=+0.160070047 container init acda0a601d0de01455f5025c29703e44bac0a678af9d0cf69314315f2c6d5af4 (image=quay.io/ceph/ceph:v20, name=stupefied_mcclintock, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 18:34:40 compute-0 podman[76977]: 2026-01-23 18:34:40.873105322 +0000 UTC m=+0.164924248 container start acda0a601d0de01455f5025c29703e44bac0a678af9d0cf69314315f2c6d5af4 (image=quay.io/ceph/ceph:v20, name=stupefied_mcclintock, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 18:34:40 compute-0 podman[76977]: 2026-01-23 18:34:40.876916167 +0000 UTC m=+0.168735103 container attach acda0a601d0de01455f5025c29703e44bac0a678af9d0cf69314315f2c6d5af4 (image=quay.io/ceph/ceph:v20, name=stupefied_mcclintock, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:40 compute-0 ceph-mon[75097]: Added host compute-0
Jan 23 18:34:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:34:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:40 compute-0 podman[77009]: 2026-01-23 18:34:40.898161996 +0000 UTC m=+0.042852649 container create 6a578b97f28216324b699d4c8738f78841dcafc5bce375d54a7ed5e666492145 (image=quay.io/ceph/ceph:v20, name=dreamy_burnell, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:40 compute-0 systemd[1]: Started libpod-conmon-6a578b97f28216324b699d4c8738f78841dcafc5bce375d54a7ed5e666492145.scope.
Jan 23 18:34:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:40 compute-0 podman[77009]: 2026-01-23 18:34:40.959434551 +0000 UTC m=+0.104125194 container init 6a578b97f28216324b699d4c8738f78841dcafc5bce375d54a7ed5e666492145 (image=quay.io/ceph/ceph:v20, name=dreamy_burnell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Jan 23 18:34:40 compute-0 podman[77009]: 2026-01-23 18:34:40.964526967 +0000 UTC m=+0.109217610 container start 6a578b97f28216324b699d4c8738f78841dcafc5bce375d54a7ed5e666492145 (image=quay.io/ceph/ceph:v20, name=dreamy_burnell, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:34:40 compute-0 podman[77009]: 2026-01-23 18:34:40.968281191 +0000 UTC m=+0.112971854 container attach 6a578b97f28216324b699d4c8738f78841dcafc5bce375d54a7ed5e666492145 (image=quay.io/ceph/ceph:v20, name=dreamy_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Jan 23 18:34:40 compute-0 podman[77009]: 2026-01-23 18:34:40.875965063 +0000 UTC m=+0.020655726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:41 compute-0 dreamy_burnell[77027]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 23 18:34:41 compute-0 systemd[1]: libpod-6a578b97f28216324b699d4c8738f78841dcafc5bce375d54a7ed5e666492145.scope: Deactivated successfully.
Jan 23 18:34:41 compute-0 conmon[77027]: conmon 6a578b97f28216324b69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a578b97f28216324b699d4c8738f78841dcafc5bce375d54a7ed5e666492145.scope/container/memory.events
Jan 23 18:34:41 compute-0 podman[77009]: 2026-01-23 18:34:41.055706308 +0000 UTC m=+0.200396941 container died 6a578b97f28216324b699d4c8738f78841dcafc5bce375d54a7ed5e666492145 (image=quay.io/ceph/ceph:v20, name=dreamy_burnell, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2450d30f6daafb2c6b581d2d5b8c2336bc92a7cad98cd03a84e4d76b497ef2a3-merged.mount: Deactivated successfully.
Jan 23 18:34:41 compute-0 podman[77009]: 2026-01-23 18:34:41.097501558 +0000 UTC m=+0.242192201 container remove 6a578b97f28216324b699d4c8738f78841dcafc5bce375d54a7ed5e666492145 (image=quay.io/ceph/ceph:v20, name=dreamy_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 18:34:41 compute-0 systemd[1]: libpod-conmon-6a578b97f28216324b699d4c8738f78841dcafc5bce375d54a7ed5e666492145.scope: Deactivated successfully.
Jan 23 18:34:41 compute-0 sudo[76868]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 23 18:34:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:41 compute-0 sudo[77061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:41 compute-0 sudo[77061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:41 compute-0 sudo[77061]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:41 compute-0 sudo[77086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 23 18:34:41 compute-0 sudo[77086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:41 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:41 compute-0 ceph-mgr[75390]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 23 18:34:41 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 23 18:34:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 18:34:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:41 compute-0 stupefied_mcclintock[77004]: Scheduled mgr update...
Jan 23 18:34:41 compute-0 systemd[1]: libpod-acda0a601d0de01455f5025c29703e44bac0a678af9d0cf69314315f2c6d5af4.scope: Deactivated successfully.
Jan 23 18:34:41 compute-0 podman[76977]: 2026-01-23 18:34:41.360097296 +0000 UTC m=+0.651916232 container died acda0a601d0de01455f5025c29703e44bac0a678af9d0cf69314315f2c6d5af4 (image=quay.io/ceph/ceph:v20, name=stupefied_mcclintock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 18:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d213b741d04d660f622cedeefcfa828b93d629919297aae381ab5aa8dc5ec4cc-merged.mount: Deactivated successfully.
Jan 23 18:34:41 compute-0 podman[76977]: 2026-01-23 18:34:41.399947398 +0000 UTC m=+0.691766314 container remove acda0a601d0de01455f5025c29703e44bac0a678af9d0cf69314315f2c6d5af4 (image=quay.io/ceph/ceph:v20, name=stupefied_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:41 compute-0 systemd[1]: libpod-conmon-acda0a601d0de01455f5025c29703e44bac0a678af9d0cf69314315f2c6d5af4.scope: Deactivated successfully.
Jan 23 18:34:41 compute-0 podman[77126]: 2026-01-23 18:34:41.458345992 +0000 UTC m=+0.038555790 container create b570af6e1dd86847d73a47035f9efb695299cb037f9d000d6d07a8ccc7abd604 (image=quay.io/ceph/ceph:v20, name=interesting_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 18:34:41 compute-0 systemd[1]: Started libpod-conmon-b570af6e1dd86847d73a47035f9efb695299cb037f9d000d6d07a8ccc7abd604.scope.
Jan 23 18:34:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a63b6c02387f282627413d49e81b4036ce29b8077f55bfa719e66e4d37b479/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a63b6c02387f282627413d49e81b4036ce29b8077f55bfa719e66e4d37b479/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a63b6c02387f282627413d49e81b4036ce29b8077f55bfa719e66e4d37b479/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:41 compute-0 podman[77126]: 2026-01-23 18:34:41.44096924 +0000 UTC m=+0.021179048 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:41 compute-0 podman[77126]: 2026-01-23 18:34:41.538482297 +0000 UTC m=+0.118692155 container init b570af6e1dd86847d73a47035f9efb695299cb037f9d000d6d07a8ccc7abd604 (image=quay.io/ceph/ceph:v20, name=interesting_rhodes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:41 compute-0 podman[77126]: 2026-01-23 18:34:41.545196245 +0000 UTC m=+0.125406033 container start b570af6e1dd86847d73a47035f9efb695299cb037f9d000d6d07a8ccc7abd604 (image=quay.io/ceph/ceph:v20, name=interesting_rhodes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 18:34:41 compute-0 podman[77126]: 2026-01-23 18:34:41.548438565 +0000 UTC m=+0.128648353 container attach b570af6e1dd86847d73a47035f9efb695299cb037f9d000d6d07a8ccc7abd604 (image=quay.io/ceph/ceph:v20, name=interesting_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:34:41 compute-0 sudo[77086]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:41 compute-0 sudo[77166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:41 compute-0 sudo[77166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:41 compute-0 sudo[77166]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054704 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:34:41 compute-0 sudo[77210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:34:41 compute-0 sudo[77210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:41 compute-0 ceph-mon[75097]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:41 compute-0 ceph-mon[75097]: Saving service mon spec with placement count:5
Jan 23 18:34:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:41 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:41 compute-0 ceph-mgr[75390]: [cephadm INFO root] Saving service crash spec with placement *
Jan 23 18:34:41 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 23 18:34:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 23 18:34:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:41 compute-0 interesting_rhodes[77150]: Scheduled crash update...
Jan 23 18:34:42 compute-0 systemd[1]: libpod-b570af6e1dd86847d73a47035f9efb695299cb037f9d000d6d07a8ccc7abd604.scope: Deactivated successfully.
Jan 23 18:34:42 compute-0 podman[77126]: 2026-01-23 18:34:42.002442189 +0000 UTC m=+0.582651977 container died b570af6e1dd86847d73a47035f9efb695299cb037f9d000d6d07a8ccc7abd604 (image=quay.io/ceph/ceph:v20, name=interesting_rhodes, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:42 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-02a63b6c02387f282627413d49e81b4036ce29b8077f55bfa719e66e4d37b479-merged.mount: Deactivated successfully.
Jan 23 18:34:42 compute-0 podman[77126]: 2026-01-23 18:34:42.071065388 +0000 UTC m=+0.651275176 container remove b570af6e1dd86847d73a47035f9efb695299cb037f9d000d6d07a8ccc7abd604 (image=quay.io/ceph/ceph:v20, name=interesting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 18:34:42 compute-0 systemd[1]: libpod-conmon-b570af6e1dd86847d73a47035f9efb695299cb037f9d000d6d07a8ccc7abd604.scope: Deactivated successfully.
Jan 23 18:34:42 compute-0 podman[77274]: 2026-01-23 18:34:42.138212339 +0000 UTC m=+0.043958396 container create 2beaed7a9863f6ffc798e16fbdf03f2bdc41efd0bf14bf54a82c54e5d50648b2 (image=quay.io/ceph/ceph:v20, name=gracious_tesla, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 18:34:42 compute-0 systemd[1]: Started libpod-conmon-2beaed7a9863f6ffc798e16fbdf03f2bdc41efd0bf14bf54a82c54e5d50648b2.scope.
Jan 23 18:34:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d627ddbba3e1a9d96499a587d22940718e52e6347362f94fae7cbd98284925/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d627ddbba3e1a9d96499a587d22940718e52e6347362f94fae7cbd98284925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d627ddbba3e1a9d96499a587d22940718e52e6347362f94fae7cbd98284925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:42 compute-0 podman[77298]: 2026-01-23 18:34:42.209790531 +0000 UTC m=+0.061154314 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:42 compute-0 podman[77274]: 2026-01-23 18:34:42.119213186 +0000 UTC m=+0.024959273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:42 compute-0 podman[77274]: 2026-01-23 18:34:42.221079542 +0000 UTC m=+0.126825629 container init 2beaed7a9863f6ffc798e16fbdf03f2bdc41efd0bf14bf54a82c54e5d50648b2 (image=quay.io/ceph/ceph:v20, name=gracious_tesla, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:34:42 compute-0 podman[77274]: 2026-01-23 18:34:42.226476397 +0000 UTC m=+0.132222474 container start 2beaed7a9863f6ffc798e16fbdf03f2bdc41efd0bf14bf54a82c54e5d50648b2 (image=quay.io/ceph/ceph:v20, name=gracious_tesla, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:42 compute-0 podman[77274]: 2026-01-23 18:34:42.230447466 +0000 UTC m=+0.136193553 container attach 2beaed7a9863f6ffc798e16fbdf03f2bdc41efd0bf14bf54a82c54e5d50648b2 (image=quay.io/ceph/ceph:v20, name=gracious_tesla, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 18:34:42 compute-0 podman[77298]: 2026-01-23 18:34:42.324915057 +0000 UTC m=+0.176278850 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:42 compute-0 sudo[77210]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:42 compute-0 sudo[77404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:42 compute-0 sudo[77404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 23 18:34:42 compute-0 sudo[77404]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4118591596' entity='client.admin' 
Jan 23 18:34:42 compute-0 systemd[1]: libpod-2beaed7a9863f6ffc798e16fbdf03f2bdc41efd0bf14bf54a82c54e5d50648b2.scope: Deactivated successfully.
Jan 23 18:34:42 compute-0 podman[77274]: 2026-01-23 18:34:42.649662853 +0000 UTC m=+0.555408940 container died 2beaed7a9863f6ffc798e16fbdf03f2bdc41efd0bf14bf54a82c54e5d50648b2 (image=quay.io/ceph/ceph:v20, name=gracious_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-63d627ddbba3e1a9d96499a587d22940718e52e6347362f94fae7cbd98284925-merged.mount: Deactivated successfully.
Jan 23 18:34:42 compute-0 sudo[77430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:34:42 compute-0 sudo[77430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:42 compute-0 podman[77274]: 2026-01-23 18:34:42.691472494 +0000 UTC m=+0.597218591 container remove 2beaed7a9863f6ffc798e16fbdf03f2bdc41efd0bf14bf54a82c54e5d50648b2 (image=quay.io/ceph/ceph:v20, name=gracious_tesla, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:42 compute-0 systemd[1]: libpod-conmon-2beaed7a9863f6ffc798e16fbdf03f2bdc41efd0bf14bf54a82c54e5d50648b2.scope: Deactivated successfully.
Jan 23 18:34:42 compute-0 podman[77467]: 2026-01-23 18:34:42.746590076 +0000 UTC m=+0.037273509 container create 622f72e858e4593f8e79bf4d14970d7926149b3b6aa9651cfda2b3eb573b135b (image=quay.io/ceph/ceph:v20, name=thirsty_mendel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:42 compute-0 systemd[1]: Started libpod-conmon-622f72e858e4593f8e79bf4d14970d7926149b3b6aa9651cfda2b3eb573b135b.scope.
Jan 23 18:34:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22a0ec64eb453e26aa6ce89ebae2fff79c35a94e2f9967563744b1a73bfcd33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22a0ec64eb453e26aa6ce89ebae2fff79c35a94e2f9967563744b1a73bfcd33/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22a0ec64eb453e26aa6ce89ebae2fff79c35a94e2f9967563744b1a73bfcd33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:42 compute-0 podman[77467]: 2026-01-23 18:34:42.823711725 +0000 UTC m=+0.114395168 container init 622f72e858e4593f8e79bf4d14970d7926149b3b6aa9651cfda2b3eb573b135b (image=quay.io/ceph/ceph:v20, name=thirsty_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 18:34:42 compute-0 podman[77467]: 2026-01-23 18:34:42.728893136 +0000 UTC m=+0.019576579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:42 compute-0 podman[77467]: 2026-01-23 18:34:42.830917705 +0000 UTC m=+0.121601128 container start 622f72e858e4593f8e79bf4d14970d7926149b3b6aa9651cfda2b3eb573b135b (image=quay.io/ceph/ceph:v20, name=thirsty_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 18:34:42 compute-0 podman[77467]: 2026-01-23 18:34:42.835585562 +0000 UTC m=+0.126268985 container attach 622f72e858e4593f8e79bf4d14970d7926149b3b6aa9651cfda2b3eb573b135b (image=quay.io/ceph/ceph:v20, name=thirsty_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:42 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:42 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77499 (sysctl)
Jan 23 18:34:42 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 23 18:34:42 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 23 18:34:42 compute-0 ceph-mon[75097]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:42 compute-0 ceph-mon[75097]: Saving service mgr spec with placement count:2
Jan 23 18:34:42 compute-0 ceph-mon[75097]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:42 compute-0 ceph-mon[75097]: Saving service crash spec with placement *
Jan 23 18:34:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:42 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4118591596' entity='client.admin' 
Jan 23 18:34:43 compute-0 sudo[77430]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 23 18:34:43 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:43 compute-0 systemd[1]: libpod-622f72e858e4593f8e79bf4d14970d7926149b3b6aa9651cfda2b3eb573b135b.scope: Deactivated successfully.
Jan 23 18:34:43 compute-0 podman[77467]: 2026-01-23 18:34:43.303387178 +0000 UTC m=+0.594070661 container died 622f72e858e4593f8e79bf4d14970d7926149b3b6aa9651cfda2b3eb573b135b (image=quay.io/ceph/ceph:v20, name=thirsty_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:43 compute-0 sudo[77540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:43 compute-0 sudo[77540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:43 compute-0 sudo[77540]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f22a0ec64eb453e26aa6ce89ebae2fff79c35a94e2f9967563744b1a73bfcd33-merged.mount: Deactivated successfully.
Jan 23 18:34:43 compute-0 podman[77467]: 2026-01-23 18:34:43.352640474 +0000 UTC m=+0.643323897 container remove 622f72e858e4593f8e79bf4d14970d7926149b3b6aa9651cfda2b3eb573b135b (image=quay.io/ceph/ceph:v20, name=thirsty_mendel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:43 compute-0 systemd[1]: libpod-conmon-622f72e858e4593f8e79bf4d14970d7926149b3b6aa9651cfda2b3eb573b135b.scope: Deactivated successfully.
Jan 23 18:34:43 compute-0 sudo[77579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 23 18:34:43 compute-0 sudo[77579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:43 compute-0 podman[77600]: 2026-01-23 18:34:43.424508703 +0000 UTC m=+0.047284947 container create b55a28d7499253ff0fd953253555afc8319d21c59334a77bb3c5a6b5b8c80679 (image=quay.io/ceph/ceph:v20, name=nostalgic_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:43 compute-0 systemd[1]: Started libpod-conmon-b55a28d7499253ff0fd953253555afc8319d21c59334a77bb3c5a6b5b8c80679.scope.
Jan 23 18:34:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bec63d72f3bfd12898f976ada1a08c907d143a4370ee4af4da8d82b2b15f47/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bec63d72f3bfd12898f976ada1a08c907d143a4370ee4af4da8d82b2b15f47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bec63d72f3bfd12898f976ada1a08c907d143a4370ee4af4da8d82b2b15f47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:43 compute-0 podman[77600]: 2026-01-23 18:34:43.49664991 +0000 UTC m=+0.119426204 container init b55a28d7499253ff0fd953253555afc8319d21c59334a77bb3c5a6b5b8c80679 (image=quay.io/ceph/ceph:v20, name=nostalgic_keldysh, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:43 compute-0 podman[77600]: 2026-01-23 18:34:43.403304666 +0000 UTC m=+0.026080960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:43 compute-0 podman[77600]: 2026-01-23 18:34:43.50348666 +0000 UTC m=+0.126262934 container start b55a28d7499253ff0fd953253555afc8319d21c59334a77bb3c5a6b5b8c80679 (image=quay.io/ceph/ceph:v20, name=nostalgic_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 18:34:43 compute-0 podman[77600]: 2026-01-23 18:34:43.508453733 +0000 UTC m=+0.131230027 container attach b55a28d7499253ff0fd953253555afc8319d21c59334a77bb3c5a6b5b8c80679 (image=quay.io/ceph/ceph:v20, name=nostalgic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 23 18:34:43 compute-0 sudo[77579]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:43 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:43 compute-0 sudo[77661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:43 compute-0 sudo[77661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:43 compute-0 sudo[77661]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:43 compute-0 sudo[77686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- inventory --format=json-pretty --filter-for-batch
Jan 23 18:34:43 compute-0 sudo[77686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 18:34:43 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:43 compute-0 ceph-mgr[75390]: [cephadm INFO root] Added label _admin to host compute-0
Jan 23 18:34:43 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 23 18:34:43 compute-0 nostalgic_keldysh[77620]: Added label _admin to host compute-0
Jan 23 18:34:43 compute-0 systemd[1]: libpod-b55a28d7499253ff0fd953253555afc8319d21c59334a77bb3c5a6b5b8c80679.scope: Deactivated successfully.
Jan 23 18:34:43 compute-0 podman[77600]: 2026-01-23 18:34:43.98590505 +0000 UTC m=+0.608681294 container died b55a28d7499253ff0fd953253555afc8319d21c59334a77bb3c5a6b5b8c80679 (image=quay.io/ceph/ceph:v20, name=nostalgic_keldysh, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4bec63d72f3bfd12898f976ada1a08c907d143a4370ee4af4da8d82b2b15f47-merged.mount: Deactivated successfully.
Jan 23 18:34:44 compute-0 podman[77600]: 2026-01-23 18:34:44.024363358 +0000 UTC m=+0.647139602 container remove b55a28d7499253ff0fd953253555afc8319d21c59334a77bb3c5a6b5b8c80679 (image=quay.io/ceph/ceph:v20, name=nostalgic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:44 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:44 compute-0 systemd[1]: libpod-conmon-b55a28d7499253ff0fd953253555afc8319d21c59334a77bb3c5a6b5b8c80679.scope: Deactivated successfully.
Jan 23 18:34:44 compute-0 podman[77732]: 2026-01-23 18:34:44.085705626 +0000 UTC m=+0.036988853 container create 0143096d2335558afe65e73a7ad1df5e385a46298a99b1aa864a9a2101965c91 (image=quay.io/ceph/ceph:v20, name=competent_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:44 compute-0 podman[77734]: 2026-01-23 18:34:44.105151319 +0000 UTC m=+0.046802656 container create 389f60d9d2077b0f00ea439a6d77c8145104b89dbd3003d4edd6a31e9d374af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_wing, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 23 18:34:44 compute-0 systemd[1]: Started libpod-conmon-0143096d2335558afe65e73a7ad1df5e385a46298a99b1aa864a9a2101965c91.scope.
Jan 23 18:34:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:44 compute-0 systemd[1]: Started libpod-conmon-389f60d9d2077b0f00ea439a6d77c8145104b89dbd3003d4edd6a31e9d374af5.scope.
Jan 23 18:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9637e0a493d7ab4b426c664c3fbc9aa3c58530cadbd6ec4ff70136b2951140f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9637e0a493d7ab4b426c664c3fbc9aa3c58530cadbd6ec4ff70136b2951140f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9637e0a493d7ab4b426c664c3fbc9aa3c58530cadbd6ec4ff70136b2951140f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:44 compute-0 podman[77732]: 2026-01-23 18:34:44.161635776 +0000 UTC m=+0.112919033 container init 0143096d2335558afe65e73a7ad1df5e385a46298a99b1aa864a9a2101965c91 (image=quay.io/ceph/ceph:v20, name=competent_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:44 compute-0 podman[77732]: 2026-01-23 18:34:44.069387729 +0000 UTC m=+0.020670996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:44 compute-0 podman[77732]: 2026-01-23 18:34:44.167647115 +0000 UTC m=+0.118930352 container start 0143096d2335558afe65e73a7ad1df5e385a46298a99b1aa864a9a2101965c91 (image=quay.io/ceph/ceph:v20, name=competent_mclean, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:44 compute-0 podman[77734]: 2026-01-23 18:34:44.170485396 +0000 UTC m=+0.112136793 container init 389f60d9d2077b0f00ea439a6d77c8145104b89dbd3003d4edd6a31e9d374af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_wing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:44 compute-0 podman[77732]: 2026-01-23 18:34:44.174029715 +0000 UTC m=+0.125312942 container attach 0143096d2335558afe65e73a7ad1df5e385a46298a99b1aa864a9a2101965c91 (image=quay.io/ceph/ceph:v20, name=competent_mclean, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:44 compute-0 podman[77734]: 2026-01-23 18:34:44.175709486 +0000 UTC m=+0.117360823 container start 389f60d9d2077b0f00ea439a6d77c8145104b89dbd3003d4edd6a31e9d374af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_wing, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 23 18:34:44 compute-0 podman[77734]: 2026-01-23 18:34:44.085465359 +0000 UTC m=+0.027116736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:34:44 compute-0 podman[77734]: 2026-01-23 18:34:44.179226803 +0000 UTC m=+0.120878180 container attach 389f60d9d2077b0f00ea439a6d77c8145104b89dbd3003d4edd6a31e9d374af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:44 compute-0 gifted_wing[77769]: 167 167
Jan 23 18:34:44 compute-0 systemd[1]: libpod-389f60d9d2077b0f00ea439a6d77c8145104b89dbd3003d4edd6a31e9d374af5.scope: Deactivated successfully.
Jan 23 18:34:44 compute-0 podman[77734]: 2026-01-23 18:34:44.181333926 +0000 UTC m=+0.122985303 container died 389f60d9d2077b0f00ea439a6d77c8145104b89dbd3003d4edd6a31e9d374af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_wing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:34:44 compute-0 podman[77734]: 2026-01-23 18:34:44.231882675 +0000 UTC m=+0.173534042 container remove 389f60d9d2077b0f00ea439a6d77c8145104b89dbd3003d4edd6a31e9d374af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 18:34:44 compute-0 systemd[1]: libpod-conmon-389f60d9d2077b0f00ea439a6d77c8145104b89dbd3003d4edd6a31e9d374af5.scope: Deactivated successfully.
Jan 23 18:34:44 compute-0 ceph-mon[75097]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:44 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:44 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:44 compute-0 ceph-mon[75097]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:44 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:44 compute-0 ceph-mon[75097]: Added label _admin to host compute-0
Jan 23 18:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccfc44ab1607857f5ff4abc35dcd3b646917270bfae3e9547a5066341661fd5c-merged.mount: Deactivated successfully.
Jan 23 18:34:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 23 18:34:44 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1163296314' entity='client.admin' 
Jan 23 18:34:44 compute-0 competent_mclean[77764]: set mgr/dashboard/cluster/status
Jan 23 18:34:44 compute-0 systemd[1]: libpod-0143096d2335558afe65e73a7ad1df5e385a46298a99b1aa864a9a2101965c91.scope: Deactivated successfully.
Jan 23 18:34:44 compute-0 podman[77732]: 2026-01-23 18:34:44.763636204 +0000 UTC m=+0.714919461 container died 0143096d2335558afe65e73a7ad1df5e385a46298a99b1aa864a9a2101965c91 (image=quay.io/ceph/ceph:v20, name=competent_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9637e0a493d7ab4b426c664c3fbc9aa3c58530cadbd6ec4ff70136b2951140f-merged.mount: Deactivated successfully.
Jan 23 18:34:44 compute-0 podman[77732]: 2026-01-23 18:34:44.803291721 +0000 UTC m=+0.754574958 container remove 0143096d2335558afe65e73a7ad1df5e385a46298a99b1aa864a9a2101965c91 (image=quay.io/ceph/ceph:v20, name=competent_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 18:34:44 compute-0 systemd[1]: libpod-conmon-0143096d2335558afe65e73a7ad1df5e385a46298a99b1aa864a9a2101965c91.scope: Deactivated successfully.
Jan 23 18:34:44 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:44 compute-0 systemd[1]: Reloading.
Jan 23 18:34:44 compute-0 systemd-rc-local-generator[77852]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:44 compute-0 systemd-sysv-generator[77855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:45 compute-0 sudo[74049]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:45 compute-0 podman[77868]: 2026-01-23 18:34:45.304290854 +0000 UTC m=+0.040014537 container create 3a6017bc2e956267e44fe48bca19709b08060f5a945256d89070e595a7f1023d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_elion, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:45 compute-0 systemd[1]: Started libpod-conmon-3a6017bc2e956267e44fe48bca19709b08060f5a945256d89070e595a7f1023d.scope.
Jan 23 18:34:45 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e15f4e0d2a343084f1c7d8319e08a23601da5b2cdfa6b0f56b1b8d33e5b467/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e15f4e0d2a343084f1c7d8319e08a23601da5b2cdfa6b0f56b1b8d33e5b467/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e15f4e0d2a343084f1c7d8319e08a23601da5b2cdfa6b0f56b1b8d33e5b467/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e15f4e0d2a343084f1c7d8319e08a23601da5b2cdfa6b0f56b1b8d33e5b467/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:45 compute-0 podman[77868]: 2026-01-23 18:34:45.288114631 +0000 UTC m=+0.023838334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:34:45 compute-0 podman[77868]: 2026-01-23 18:34:45.397548677 +0000 UTC m=+0.133272390 container init 3a6017bc2e956267e44fe48bca19709b08060f5a945256d89070e595a7f1023d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 23 18:34:45 compute-0 podman[77868]: 2026-01-23 18:34:45.408535189 +0000 UTC m=+0.144258912 container start 3a6017bc2e956267e44fe48bca19709b08060f5a945256d89070e595a7f1023d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:45 compute-0 podman[77868]: 2026-01-23 18:34:45.413483903 +0000 UTC m=+0.149207596 container attach 3a6017bc2e956267e44fe48bca19709b08060f5a945256d89070e595a7f1023d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:45 compute-0 sudo[77912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nojziljhycleoxfgnbfsrxdifdknhdgr ; /usr/bin/python3'
Jan 23 18:34:45 compute-0 sudo[77912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:45 compute-0 python3[77914]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:34:45 compute-0 boring_elion[77884]: [
Jan 23 18:34:45 compute-0 boring_elion[77884]:     {
Jan 23 18:34:45 compute-0 boring_elion[77884]:         "available": false,
Jan 23 18:34:45 compute-0 boring_elion[77884]:         "being_replaced": false,
Jan 23 18:34:45 compute-0 boring_elion[77884]:         "ceph_device_lvm": false,
Jan 23 18:34:45 compute-0 boring_elion[77884]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 18:34:45 compute-0 boring_elion[77884]:         "lsm_data": {},
Jan 23 18:34:45 compute-0 boring_elion[77884]:         "lvs": [],
Jan 23 18:34:45 compute-0 boring_elion[77884]:         "path": "/dev/sr0",
Jan 23 18:34:45 compute-0 boring_elion[77884]:         "rejected_reasons": [
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "Insufficient space (<5GB)",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "Has a FileSystem"
Jan 23 18:34:45 compute-0 boring_elion[77884]:         ],
Jan 23 18:34:45 compute-0 boring_elion[77884]:         "sys_api": {
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "actuators": null,
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "device_nodes": [
Jan 23 18:34:45 compute-0 boring_elion[77884]:                 "sr0"
Jan 23 18:34:45 compute-0 boring_elion[77884]:             ],
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "devname": "sr0",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "human_readable_size": "482.00 KB",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "id_bus": "ata",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "model": "QEMU DVD-ROM",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "nr_requests": "2",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "parent": "/dev/sr0",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "partitions": {},
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "path": "/dev/sr0",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "removable": "1",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "rev": "2.5+",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "ro": "0",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "rotational": "1",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "sas_address": "",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "sas_device_handle": "",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "scheduler_mode": "mq-deadline",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "sectors": 0,
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "sectorsize": "2048",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "size": 493568.0,
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "support_discard": "2048",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "type": "disk",
Jan 23 18:34:45 compute-0 boring_elion[77884]:             "vendor": "QEMU"
Jan 23 18:34:45 compute-0 boring_elion[77884]:         }
Jan 23 18:34:45 compute-0 boring_elion[77884]:     }
Jan 23 18:34:45 compute-0 boring_elion[77884]: ]
Jan 23 18:34:45 compute-0 podman[78035]: 2026-01-23 18:34:45.855954919 +0000 UTC m=+0.021929537 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:45 compute-0 systemd[1]: libpod-3a6017bc2e956267e44fe48bca19709b08060f5a945256d89070e595a7f1023d.scope: Deactivated successfully.
Jan 23 18:34:46 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:46 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1163296314' entity='client.admin' 
Jan 23 18:34:46 compute-0 podman[78035]: 2026-01-23 18:34:46.111665566 +0000 UTC m=+0.277640144 container create bfd8741344f272681b7b20625c8e30487574e829d2104ffaede1acf85f8c2103 (image=quay.io/ceph/ceph:v20, name=cranky_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 23 18:34:46 compute-0 podman[77868]: 2026-01-23 18:34:46.148698957 +0000 UTC m=+0.884422640 container died 3a6017bc2e956267e44fe48bca19709b08060f5a945256d89070e595a7f1023d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:46 compute-0 systemd[1]: Started libpod-conmon-bfd8741344f272681b7b20625c8e30487574e829d2104ffaede1acf85f8c2103.scope.
Jan 23 18:34:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4e15f4e0d2a343084f1c7d8319e08a23601da5b2cdfa6b0f56b1b8d33e5b467-merged.mount: Deactivated successfully.
Jan 23 18:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2324867ce7ca5c0411b0e7d46c80081fe86fc0cfe201e8e97b116fa5493bd2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2324867ce7ca5c0411b0e7d46c80081fe86fc0cfe201e8e97b116fa5493bd2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:46 compute-0 podman[78624]: 2026-01-23 18:34:46.19579793 +0000 UTC m=+0.211793754 container remove 3a6017bc2e956267e44fe48bca19709b08060f5a945256d89070e595a7f1023d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_elion, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:46 compute-0 systemd[1]: libpod-conmon-3a6017bc2e956267e44fe48bca19709b08060f5a945256d89070e595a7f1023d.scope: Deactivated successfully.
Jan 23 18:34:46 compute-0 podman[78035]: 2026-01-23 18:34:46.202429895 +0000 UTC m=+0.368404493 container init bfd8741344f272681b7b20625c8e30487574e829d2104ffaede1acf85f8c2103 (image=quay.io/ceph/ceph:v20, name=cranky_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:34:46 compute-0 podman[78035]: 2026-01-23 18:34:46.208932827 +0000 UTC m=+0.374907405 container start bfd8741344f272681b7b20625c8e30487574e829d2104ffaede1acf85f8c2103 (image=quay.io/ceph/ceph:v20, name=cranky_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:34:46 compute-0 podman[78035]: 2026-01-23 18:34:46.212969058 +0000 UTC m=+0.378943666 container attach bfd8741344f272681b7b20625c8e30487574e829d2104ffaede1acf85f8c2103 (image=quay.io/ceph/ceph:v20, name=cranky_robinson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:46 compute-0 sudo[77686]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:34:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:34:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 18:34:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 23 18:34:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:34:46 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:34:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:34:46 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 23 18:34:46 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 23 18:34:46 compute-0 sudo[78646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 23 18:34:46 compute-0 sudo[78646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78646]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 sudo[78690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph
Jan 23 18:34:46 compute-0 sudo[78690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78690]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 sudo[78715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph/ceph.conf.new
Jan 23 18:34:46 compute-0 sudo[78715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78715]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 sudo[78740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:46 compute-0 sudo[78740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78740]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 sudo[78765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph/ceph.conf.new
Jan 23 18:34:46 compute-0 sudo[78765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78765]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 23 18:34:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1412492869' entity='client.admin' 
Jan 23 18:34:46 compute-0 sudo[78813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph/ceph.conf.new
Jan 23 18:34:46 compute-0 sudo[78813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78813]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 systemd[1]: libpod-bfd8741344f272681b7b20625c8e30487574e829d2104ffaede1acf85f8c2103.scope: Deactivated successfully.
Jan 23 18:34:46 compute-0 podman[78035]: 2026-01-23 18:34:46.647479265 +0000 UTC m=+0.813453863 container died bfd8741344f272681b7b20625c8e30487574e829d2104ffaede1acf85f8c2103 (image=quay.io/ceph/ceph:v20, name=cranky_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae2324867ce7ca5c0411b0e7d46c80081fe86fc0cfe201e8e97b116fa5493bd2-merged.mount: Deactivated successfully.
Jan 23 18:34:46 compute-0 podman[78035]: 2026-01-23 18:34:46.691157412 +0000 UTC m=+0.857132000 container remove bfd8741344f272681b7b20625c8e30487574e829d2104ffaede1acf85f8c2103 (image=quay.io/ceph/ceph:v20, name=cranky_robinson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:34:46 compute-0 sudo[78840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph/ceph.conf.new
Jan 23 18:34:46 compute-0 sudo[78840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 systemd[1]: libpod-conmon-bfd8741344f272681b7b20625c8e30487574e829d2104ffaede1acf85f8c2103.scope: Deactivated successfully.
Jan 23 18:34:46 compute-0 sudo[78840]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 sudo[77912]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:34:46 compute-0 sudo[78875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 23 18:34:46 compute-0 sudo[78875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78875]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.conf
Jan 23 18:34:46 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.conf
Jan 23 18:34:46 compute-0 sudo[78900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config
Jan 23 18:34:46 compute-0 sudo[78900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78900]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 ceph-mgr[75390]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 18:34:46 compute-0 sudo[78925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config
Jan 23 18:34:46 compute-0 sudo[78925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78925]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 sudo[78950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.conf.new
Jan 23 18:34:46 compute-0 sudo[78950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78950]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:46 compute-0 sudo[78975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:46 compute-0 sudo[78975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:46 compute-0 sudo[78975]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.conf.new
Jan 23 18:34:47 compute-0 sudo[79000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79000]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.conf.new
Jan 23 18:34:47 compute-0 sudo[79100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79100]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.conf.new
Jan 23 18:34:47 compute-0 sudo[79148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79148]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 23 18:34:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:34:47 compute-0 ceph-mon[75097]: Updating compute-0:/etc/ceph/ceph.conf
Jan 23 18:34:47 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1412492869' entity='client.admin' 
Jan 23 18:34:47 compute-0 ceph-mon[75097]: Updating compute-0:/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.conf
Jan 23 18:34:47 compute-0 sudo[79173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.conf.new /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.conf
Jan 23 18:34:47 compute-0 sudo[79173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79173]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 18:34:47 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 18:34:47 compute-0 sudo[79198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 23 18:34:47 compute-0 sudo[79198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79198]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph
Jan 23 18:34:47 compute-0 sudo[79231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79231]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph/ceph.client.admin.keyring.new
Jan 23 18:34:47 compute-0 sudo[79277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79277]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyiezqjayjqgnsapunafgxurwseazgei ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769193287.0246842-36439-69589923203993/async_wrapper.py j261236930165 30 /home/zuul/.ansible/tmp/ansible-tmp-1769193287.0246842-36439-69589923203993/AnsiballZ_command.py _'
Jan 23 18:34:47 compute-0 sudo[79362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:47 compute-0 sudo[79328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:47 compute-0 sudo[79328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79328]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph/ceph.client.admin.keyring.new
Jan 23 18:34:47 compute-0 sudo[79373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79373]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph/ceph.client.admin.keyring.new
Jan 23 18:34:47 compute-0 sudo[79421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79421]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 ansible-async_wrapper.py[79370]: Invoked with j261236930165 30 /home/zuul/.ansible/tmp/ansible-tmp-1769193287.0246842-36439-69589923203993/AnsiballZ_command.py _
Jan 23 18:34:47 compute-0 ansible-async_wrapper.py[79450]: Starting module and watcher
Jan 23 18:34:47 compute-0 ansible-async_wrapper.py[79450]: Start watching 79453 (30)
Jan 23 18:34:47 compute-0 ansible-async_wrapper.py[79453]: Start module (79453)
Jan 23 18:34:47 compute-0 ansible-async_wrapper.py[79370]: Return async_wrapper task started.
Jan 23 18:34:47 compute-0 sudo[79362]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph/ceph.client.admin.keyring.new
Jan 23 18:34:47 compute-0 sudo[79447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79447]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 23 18:34:47 compute-0 sudo[79476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79476]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 python3[79456]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:34:47 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.client.admin.keyring
Jan 23 18:34:47 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.client.admin.keyring
Jan 23 18:34:47 compute-0 sudo[79502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config
Jan 23 18:34:47 compute-0 sudo[79502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79502]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config
Jan 23 18:34:47 compute-0 sudo[79538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79538]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 podman[79501]: 2026-01-23 18:34:47.82011281 +0000 UTC m=+0.024329237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:47 compute-0 sudo[79563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.client.admin.keyring.new
Jan 23 18:34:47 compute-0 sudo[79563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:47 compute-0 sudo[79563]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:47 compute-0 sudo[79588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:47 compute-0 sudo[79588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:48 compute-0 sudo[79588]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:48 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:48 compute-0 sudo[79613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.client.admin.keyring.new
Jan 23 18:34:48 compute-0 sudo[79613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:48 compute-0 sudo[79613]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:48 compute-0 sudo[79661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.client.admin.keyring.new
Jan 23 18:34:48 compute-0 sudo[79661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:48 compute-0 sudo[79661]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:48 compute-0 sudo[79686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.client.admin.keyring.new
Jan 23 18:34:48 compute-0 sudo[79686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:48 compute-0 sudo[79686]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:48 compute-0 sudo[79711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-4da2adac-d096-5ead-9ec5-3108259ba9e4/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.client.admin.keyring.new /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.client.admin.keyring
Jan 23 18:34:48 compute-0 sudo[79711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:48 compute-0 sudo[79711]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:48 compute-0 podman[79501]: 2026-01-23 18:34:48.344217409 +0000 UTC m=+0.548433816 container create 55381b863dd0f28859205b74fef715808a916f3e1b4008116925baa416e614db (image=quay.io/ceph/ceph:v20, name=confident_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 23 18:34:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:34:48 compute-0 ceph-mon[75097]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 18:34:48 compute-0 ceph-mon[75097]: Updating compute-0:/var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/config/ceph.client.admin.keyring
Jan 23 18:34:48 compute-0 systemd[1]: Started libpod-conmon-55381b863dd0f28859205b74fef715808a916f3e1b4008116925baa416e614db.scope.
Jan 23 18:34:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d885ab626e2c900d0405708d859448fd037c759e2fde58cbcfc01033eb02b4c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d885ab626e2c900d0405708d859448fd037c759e2fde58cbcfc01033eb02b4c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:48 compute-0 ceph-mgr[75390]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 23 18:34:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:48 compute-0 ceph-mon[75097]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 23 18:34:48 compute-0 sudo[79787]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcdnspdnflnmrmqjoqmsntpwongbuhpt ; /usr/bin/python3'
Jan 23 18:34:48 compute-0 sudo[79787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:49 compute-0 podman[79501]: 2026-01-23 18:34:49.026910586 +0000 UTC m=+1.231127003 container init 55381b863dd0f28859205b74fef715808a916f3e1b4008116925baa416e614db (image=quay.io/ceph/ceph:v20, name=confident_panini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 18:34:49 compute-0 podman[79501]: 2026-01-23 18:34:49.036256808 +0000 UTC m=+1.240473235 container start 55381b863dd0f28859205b74fef715808a916f3e1b4008116925baa416e614db (image=quay.io/ceph/ceph:v20, name=confident_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:49 compute-0 python3[79789]: ansible-ansible.legacy.async_status Invoked with jid=j261236930165.79370 mode=status _async_dir=/root/.ansible_async
Jan 23 18:34:49 compute-0 sudo[79787]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:49 compute-0 podman[79501]: 2026-01-23 18:34:49.159343903 +0000 UTC m=+1.363560310 container attach 55381b863dd0f28859205b74fef715808a916f3e1b4008116925baa416e614db (image=quay.io/ceph/ceph:v20, name=confident_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:34:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:49 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev e5f2854c-6064-492d-b1a7-9eb8ec984ad0 (Updating crash deployment (+1 -> 1))
Jan 23 18:34:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 23 18:34:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 23 18:34:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 18:34:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:34:49 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:49 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 23 18:34:49 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 23 18:34:49 compute-0 sudo[79810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:49 compute-0 sudo[79810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:49 compute-0 sudo[79810]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:49 compute-0 sudo[79835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:49 compute-0 sudo[79835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:49 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:34:49 compute-0 confident_panini[79738]: 
Jan 23 18:34:49 compute-0 confident_panini[79738]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 18:34:49 compute-0 systemd[1]: libpod-55381b863dd0f28859205b74fef715808a916f3e1b4008116925baa416e614db.scope: Deactivated successfully.
Jan 23 18:34:49 compute-0 podman[79501]: 2026-01-23 18:34:49.472996842 +0000 UTC m=+1.677213289 container died 55381b863dd0f28859205b74fef715808a916f3e1b4008116925baa416e614db (image=quay.io/ceph/ceph:v20, name=confident_panini, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d885ab626e2c900d0405708d859448fd037c759e2fde58cbcfc01033eb02b4c-merged.mount: Deactivated successfully.
Jan 23 18:34:49 compute-0 podman[79501]: 2026-01-23 18:34:49.532047782 +0000 UTC m=+1.736264189 container remove 55381b863dd0f28859205b74fef715808a916f3e1b4008116925baa416e614db (image=quay.io/ceph/ceph:v20, name=confident_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:49 compute-0 systemd[1]: libpod-conmon-55381b863dd0f28859205b74fef715808a916f3e1b4008116925baa416e614db.scope: Deactivated successfully.
Jan 23 18:34:49 compute-0 ansible-async_wrapper.py[79453]: Module complete (79453)
Jan 23 18:34:49 compute-0 ceph-mon[75097]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:49 compute-0 ceph-mon[75097]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 23 18:34:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 23 18:34:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 18:34:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:49 compute-0 podman[79914]: 2026-01-23 18:34:49.798696991 +0000 UTC m=+0.054198051 container create 7aabe3a712c7e2ea16b80c308316e30371fac9ab0f253b8e874823cf032e18ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:34:49 compute-0 systemd[1]: Started libpod-conmon-7aabe3a712c7e2ea16b80c308316e30371fac9ab0f253b8e874823cf032e18ac.scope.
Jan 23 18:34:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:49 compute-0 podman[79914]: 2026-01-23 18:34:49.770677773 +0000 UTC m=+0.026178903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:34:49 compute-0 podman[79914]: 2026-01-23 18:34:49.876689093 +0000 UTC m=+0.132190123 container init 7aabe3a712c7e2ea16b80c308316e30371fac9ab0f253b8e874823cf032e18ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 23 18:34:49 compute-0 podman[79914]: 2026-01-23 18:34:49.885379489 +0000 UTC m=+0.140880569 container start 7aabe3a712c7e2ea16b80c308316e30371fac9ab0f253b8e874823cf032e18ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 23 18:34:49 compute-0 determined_leakey[79931]: 167 167
Jan 23 18:34:49 compute-0 systemd[1]: libpod-7aabe3a712c7e2ea16b80c308316e30371fac9ab0f253b8e874823cf032e18ac.scope: Deactivated successfully.
Jan 23 18:34:49 compute-0 podman[79914]: 2026-01-23 18:34:49.890020314 +0000 UTC m=+0.145521374 container attach 7aabe3a712c7e2ea16b80c308316e30371fac9ab0f253b8e874823cf032e18ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 23 18:34:49 compute-0 podman[79914]: 2026-01-23 18:34:49.890311171 +0000 UTC m=+0.145812211 container died 7aabe3a712c7e2ea16b80c308316e30371fac9ab0f253b8e874823cf032e18ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 18:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d9709e27c539def66972e86d4332f8aeadfcb06270bb3af687022952ba95402-merged.mount: Deactivated successfully.
Jan 23 18:34:49 compute-0 podman[79914]: 2026-01-23 18:34:49.927933718 +0000 UTC m=+0.183434768 container remove 7aabe3a712c7e2ea16b80c308316e30371fac9ab0f253b8e874823cf032e18ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:49 compute-0 systemd[1]: libpod-conmon-7aabe3a712c7e2ea16b80c308316e30371fac9ab0f253b8e874823cf032e18ac.scope: Deactivated successfully.
Jan 23 18:34:49 compute-0 systemd[1]: Reloading.
Jan 23 18:34:50 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:50 compute-0 systemd-rc-local-generator[79976]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:50 compute-0 systemd-sysv-generator[79979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:50 compute-0 systemd[1]: Reloading.
Jan 23 18:34:50 compute-0 systemd-sysv-generator[80064]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:50 compute-0 systemd-rc-local-generator[80059]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:50 compute-0 sudo[80035]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtuqtskqclvfeplrahusajhrhyzuetch ; /usr/bin/python3'
Jan 23 18:34:50 compute-0 sudo[80035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:50 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:34:50 compute-0 ceph-mon[75097]: Deploying daemon crash.compute-0 on compute-0
Jan 23 18:34:50 compute-0 ceph-mon[75097]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:34:50 compute-0 python3[80074]: ansible-ansible.legacy.async_status Invoked with jid=j261236930165.79370 mode=status _async_dir=/root/.ansible_async
Jan 23 18:34:50 compute-0 sudo[80035]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:50 compute-0 podman[80146]: 2026-01-23 18:34:50.823932065 +0000 UTC m=+0.047938894 container create 90dd94238ca73079035ddf2040af56104dd0e490b899269406a5ffd5d4043851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:50 compute-0 sudo[80182]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lshzhlecrmvnnzzoqiqhgzpgmagcalcu ; /usr/bin/python3'
Jan 23 18:34:50 compute-0 sudo[80182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583b264a18ba0d3c86d7c257eab344a1b6fa3e154ff94ef75df708f8de23638f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583b264a18ba0d3c86d7c257eab344a1b6fa3e154ff94ef75df708f8de23638f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583b264a18ba0d3c86d7c257eab344a1b6fa3e154ff94ef75df708f8de23638f/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583b264a18ba0d3c86d7c257eab344a1b6fa3e154ff94ef75df708f8de23638f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:50 compute-0 podman[80146]: 2026-01-23 18:34:50.804718217 +0000 UTC m=+0.028725076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:34:50 compute-0 python3[80184]: ansible-ansible.legacy.async_status Invoked with jid=j261236930165.79370 mode=cleanup _async_dir=/root/.ansible_async
Jan 23 18:34:50 compute-0 sudo[80182]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:51 compute-0 podman[80146]: 2026-01-23 18:34:51.52620498 +0000 UTC m=+0.750211829 container init 90dd94238ca73079035ddf2040af56104dd0e490b899269406a5ffd5d4043851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:51 compute-0 podman[80146]: 2026-01-23 18:34:51.533417499 +0000 UTC m=+0.757424328 container start 90dd94238ca73079035ddf2040af56104dd0e490b899269406a5ffd5d4043851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 18:34:51 compute-0 bash[80146]: 90dd94238ca73079035ddf2040af56104dd0e490b899269406a5ffd5d4043851
Jan 23 18:34:51 compute-0 systemd[1]: Started Ceph crash.compute-0 for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:34:51 compute-0 sudo[80213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqjsmhrgaqgewudxfalnokzqnksjdtjl ; /usr/bin/python3'
Jan 23 18:34:51 compute-0 sudo[80213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0[80187]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 23 18:34:51 compute-0 sudo[79835]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:34:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 23 18:34:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:51 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev e5f2854c-6064-492d-b1a7-9eb8ec984ad0 (Updating crash deployment (+1 -> 1))
Jan 23 18:34:51 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event e5f2854c-6064-492d-b1a7-9eb8ec984ad0 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 23 18:34:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 23 18:34:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 23 18:34:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:51 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev 9b694e64-3080-47d0-8322-08904d1d075e (Updating mgr deployment (+1 -> 2))
Jan 23 18:34:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hkhaml", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 23 18:34:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.hkhaml", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 23 18:34:51 compute-0 ceph-mon[75097]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hkhaml", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 23 18:34:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 18:34:51 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mgr services"} : dispatch
Jan 23 18:34:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:34:51 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:51 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.hkhaml on compute-0
Jan 23 18:34:51 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.hkhaml on compute-0
Jan 23 18:34:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0[80187]: 2026-01-23T18:34:51.699+0000 7f65b7160640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 23 18:34:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0[80187]: 2026-01-23T18:34:51.699+0000 7f65b7160640 -1 AuthRegistry(0x7f65b0052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 23 18:34:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0[80187]: 2026-01-23T18:34:51.700+0000 7f65b7160640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 23 18:34:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0[80187]: 2026-01-23T18:34:51.700+0000 7f65b7160640 -1 AuthRegistry(0x7f65b715efe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 23 18:34:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0[80187]: 2026-01-23T18:34:51.701+0000 7f65b4ed5640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 23 18:34:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0[80187]: 2026-01-23T18:34:51.701+0000 7f65b7160640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 23 18:34:51 compute-0 sudo[80220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0[80187]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 23 18:34:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-crash-compute-0[80187]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 23 18:34:51 compute-0 sudo[80220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:51 compute-0 sudo[80220]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:51 compute-0 python3[80217]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 18:34:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:34:51 compute-0 sudo[80213]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:51 compute-0 sudo[80255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:51 compute-0 sudo[80255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:52 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:52 compute-0 sudo[80321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwozreajnzwqywweknudefpizzffavck ; /usr/bin/python3'
Jan 23 18:34:52 compute-0 sudo[80321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:52 compute-0 podman[80348]: 2026-01-23 18:34:52.172664034 +0000 UTC m=+0.044325394 container create 30536c2ac5d61596d54a890612347fbaef6d7b4168d0158732cbe847f02c4e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:52 compute-0 systemd[1]: Started libpod-conmon-30536c2ac5d61596d54a890612347fbaef6d7b4168d0158732cbe847f02c4e43.scope.
Jan 23 18:34:52 compute-0 podman[80348]: 2026-01-23 18:34:52.150121823 +0000 UTC m=+0.021783163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:34:52 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:52 compute-0 python3[80333]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:34:52 compute-0 podman[80348]: 2026-01-23 18:34:52.263145847 +0000 UTC m=+0.134807177 container init 30536c2ac5d61596d54a890612347fbaef6d7b4168d0158732cbe847f02c4e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:34:52 compute-0 podman[80348]: 2026-01-23 18:34:52.272207923 +0000 UTC m=+0.143869253 container start 30536c2ac5d61596d54a890612347fbaef6d7b4168d0158732cbe847f02c4e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:52 compute-0 podman[80348]: 2026-01-23 18:34:52.276285194 +0000 UTC m=+0.147946524 container attach 30536c2ac5d61596d54a890612347fbaef6d7b4168d0158732cbe847f02c4e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wilbur, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 18:34:52 compute-0 vibrant_wilbur[80364]: 167 167
Jan 23 18:34:52 compute-0 systemd[1]: libpod-30536c2ac5d61596d54a890612347fbaef6d7b4168d0158732cbe847f02c4e43.scope: Deactivated successfully.
Jan 23 18:34:52 compute-0 conmon[80364]: conmon 30536c2ac5d61596d54a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30536c2ac5d61596d54a890612347fbaef6d7b4168d0158732cbe847f02c4e43.scope/container/memory.events
Jan 23 18:34:52 compute-0 podman[80348]: 2026-01-23 18:34:52.279364132 +0000 UTC m=+0.151025482 container died 30536c2ac5d61596d54a890612347fbaef6d7b4168d0158732cbe847f02c4e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wilbur, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 18:34:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7750577fee300294997c5086ae5d87edf3a9ac06b2d7f7cb9ba07605989f79cd-merged.mount: Deactivated successfully.
Jan 23 18:34:52 compute-0 podman[80367]: 2026-01-23 18:34:52.326912935 +0000 UTC m=+0.059196355 container create 4d39e77a6828ed2dcaf81b4552a875364c61596ca0618ec373d918c95165208d (image=quay.io/ceph/ceph:v20, name=hungry_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 18:34:52 compute-0 podman[80348]: 2026-01-23 18:34:52.348620555 +0000 UTC m=+0.220281875 container remove 30536c2ac5d61596d54a890612347fbaef6d7b4168d0158732cbe847f02c4e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wilbur, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:52 compute-0 systemd[1]: libpod-conmon-30536c2ac5d61596d54a890612347fbaef6d7b4168d0158732cbe847f02c4e43.scope: Deactivated successfully.
Jan 23 18:34:52 compute-0 systemd[1]: Started libpod-conmon-4d39e77a6828ed2dcaf81b4552a875364c61596ca0618ec373d918c95165208d.scope.
Jan 23 18:34:52 compute-0 podman[80367]: 2026-01-23 18:34:52.296808615 +0000 UTC m=+0.029092065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:52 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:52 compute-0 systemd[1]: Reloading.
Jan 23 18:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c201b895f6282a3f50d4fe219cb78283daadbf1a56d465e0992deb6672be5216/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c201b895f6282a3f50d4fe219cb78283daadbf1a56d465e0992deb6672be5216/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c201b895f6282a3f50d4fe219cb78283daadbf1a56d465e0992deb6672be5216/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:52 compute-0 podman[80367]: 2026-01-23 18:34:52.430534355 +0000 UTC m=+0.162817855 container init 4d39e77a6828ed2dcaf81b4552a875364c61596ca0618ec373d918c95165208d (image=quay.io/ceph/ceph:v20, name=hungry_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:52 compute-0 podman[80367]: 2026-01-23 18:34:52.438481352 +0000 UTC m=+0.170764792 container start 4d39e77a6828ed2dcaf81b4552a875364c61596ca0618ec373d918c95165208d (image=quay.io/ceph/ceph:v20, name=hungry_mirzakhani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:52 compute-0 podman[80367]: 2026-01-23 18:34:52.445485467 +0000 UTC m=+0.177768917 container attach 4d39e77a6828ed2dcaf81b4552a875364c61596ca0618ec373d918c95165208d (image=quay.io/ceph/ceph:v20, name=hungry_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:52 compute-0 systemd-sysv-generator[80429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:52 compute-0 systemd-rc-local-generator[80426]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:52 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:52 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.hkhaml", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 23 18:34:52 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hkhaml", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 23 18:34:52 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mgr services"} : dispatch
Jan 23 18:34:52 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:52 compute-0 ceph-mon[75097]: Deploying daemon mgr.compute-0.hkhaml on compute-0
Jan 23 18:34:52 compute-0 ansible-async_wrapper.py[79450]: Done in kid B.
Jan 23 18:34:52 compute-0 systemd[1]: Reloading.
Jan 23 18:34:52 compute-0 systemd-sysv-generator[80487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:34:52 compute-0 systemd-rc-local-generator[80484]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:34:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:34:52 compute-0 hungry_mirzakhani[80395]: 
Jan 23 18:34:52 compute-0 hungry_mirzakhani[80395]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 18:34:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:52 compute-0 podman[80494]: 2026-01-23 18:34:52.918520564 +0000 UTC m=+0.029733551 container died 4d39e77a6828ed2dcaf81b4552a875364c61596ca0618ec373d918c95165208d (image=quay.io/ceph/ceph:v20, name=hungry_mirzakhani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 18:34:52 compute-0 systemd[1]: libpod-4d39e77a6828ed2dcaf81b4552a875364c61596ca0618ec373d918c95165208d.scope: Deactivated successfully.
Jan 23 18:34:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c201b895f6282a3f50d4fe219cb78283daadbf1a56d465e0992deb6672be5216-merged.mount: Deactivated successfully.
Jan 23 18:34:52 compute-0 systemd[1]: Starting Ceph mgr.compute-0.hkhaml for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:34:52 compute-0 podman[80494]: 2026-01-23 18:34:52.997537562 +0000 UTC m=+0.108750529 container remove 4d39e77a6828ed2dcaf81b4552a875364c61596ca0618ec373d918c95165208d (image=quay.io/ceph/ceph:v20, name=hungry_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:53 compute-0 systemd[1]: libpod-conmon-4d39e77a6828ed2dcaf81b4552a875364c61596ca0618ec373d918c95165208d.scope: Deactivated successfully.
Jan 23 18:34:53 compute-0 sudo[80321]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:53 compute-0 podman[80559]: 2026-01-23 18:34:53.157187907 +0000 UTC m=+0.019186539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:34:53 compute-0 sudo[80595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsndbcnzgheaqucrzfzceozsgibdasnt ; /usr/bin/python3'
Jan 23 18:34:53 compute-0 sudo[80595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:53 compute-0 python3[80597]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:34:53 compute-0 podman[80559]: 2026-01-23 18:34:53.573894151 +0000 UTC m=+0.435892793 container create 568b223c0dbbf59be381088fe76cb1eb2d494be57db448d55fbcafb69df32ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-hkhaml, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:34:53 compute-0 podman[80598]: 2026-01-23 18:34:53.615142668 +0000 UTC m=+0.068256970 container create d84a9c28722735ae1aafb8d2459ecc42e91d7ddaed4586c4aa581c137390a5b5 (image=quay.io/ceph/ceph:v20, name=ecstatic_goldwasser, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 18:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3038d510ce785bb077123825e6ea3119a0092354b142b78982f56762b2c11b47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3038d510ce785bb077123825e6ea3119a0092354b142b78982f56762b2c11b47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3038d510ce785bb077123825e6ea3119a0092354b142b78982f56762b2c11b47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3038d510ce785bb077123825e6ea3119a0092354b142b78982f56762b2c11b47/merged/var/lib/ceph/mgr/ceph-compute-0.hkhaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:53 compute-0 systemd[1]: Started libpod-conmon-d84a9c28722735ae1aafb8d2459ecc42e91d7ddaed4586c4aa581c137390a5b5.scope.
Jan 23 18:34:53 compute-0 podman[80559]: 2026-01-23 18:34:53.648748895 +0000 UTC m=+0.510747547 container init 568b223c0dbbf59be381088fe76cb1eb2d494be57db448d55fbcafb69df32ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-hkhaml, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:34:53 compute-0 podman[80559]: 2026-01-23 18:34:53.657547934 +0000 UTC m=+0.519546556 container start 568b223c0dbbf59be381088fe76cb1eb2d494be57db448d55fbcafb69df32ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-hkhaml, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:53 compute-0 ceph-mon[75097]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:34:53 compute-0 ceph-mon[75097]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:53 compute-0 bash[80559]: 568b223c0dbbf59be381088fe76cb1eb2d494be57db448d55fbcafb69df32ac9
Jan 23 18:34:53 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df9778b92013773226197744d556ae6d721eccc9fd56d0220d87c5f825e1915e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df9778b92013773226197744d556ae6d721eccc9fd56d0220d87c5f825e1915e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df9778b92013773226197744d556ae6d721eccc9fd56d0220d87c5f825e1915e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:53 compute-0 systemd[1]: Started Ceph mgr.compute-0.hkhaml for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:34:53 compute-0 podman[80598]: 2026-01-23 18:34:53.589957591 +0000 UTC m=+0.043071923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:53 compute-0 podman[80598]: 2026-01-23 18:34:53.684783751 +0000 UTC m=+0.137898083 container init d84a9c28722735ae1aafb8d2459ecc42e91d7ddaed4586c4aa581c137390a5b5 (image=quay.io/ceph/ceph:v20, name=ecstatic_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:34:53 compute-0 podman[80598]: 2026-01-23 18:34:53.692801891 +0000 UTC m=+0.145916183 container start d84a9c28722735ae1aafb8d2459ecc42e91d7ddaed4586c4aa581c137390a5b5 (image=quay.io/ceph/ceph:v20, name=ecstatic_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 18:34:53 compute-0 podman[80598]: 2026-01-23 18:34:53.696453383 +0000 UTC m=+0.149567675 container attach d84a9c28722735ae1aafb8d2459ecc42e91d7ddaed4586c4aa581c137390a5b5 (image=quay.io/ceph/ceph:v20, name=ecstatic_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 18:34:53 compute-0 ceph-mgr[80623]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 18:34:53 compute-0 ceph-mgr[80623]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 23 18:34:53 compute-0 ceph-mgr[80623]: pidfile_write: ignore empty --pid-file
Jan 23 18:34:53 compute-0 sudo[80255]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:34:53 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:53 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 18:34:53 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:53 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev 9b694e64-3080-47d0-8322-08904d1d075e (Updating mgr deployment (+1 -> 2))
Jan 23 18:34:53 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event 9b694e64-3080-47d0-8322-08904d1d075e (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 23 18:34:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 18:34:53 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:53 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'alerts'
Jan 23 18:34:53 compute-0 sudo[80645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:34:53 compute-0 sudo[80645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:53 compute-0 sudo[80645]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:53 compute-0 sudo[80677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:53 compute-0 sudo[80677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:53 compute-0 sudo[80677]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:53 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'balancer'
Jan 23 18:34:53 compute-0 sudo[80714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:34:53 compute-0 sudo[80714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:53 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'cephadm'
Jan 23 18:34:54 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 23 18:34:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2397186180' entity='client.admin' 
Jan 23 18:34:54 compute-0 systemd[1]: libpod-d84a9c28722735ae1aafb8d2459ecc42e91d7ddaed4586c4aa581c137390a5b5.scope: Deactivated successfully.
Jan 23 18:34:54 compute-0 podman[80598]: 2026-01-23 18:34:54.134830847 +0000 UTC m=+0.587945139 container died d84a9c28722735ae1aafb8d2459ecc42e91d7ddaed4586c4aa581c137390a5b5 (image=quay.io/ceph/ceph:v20, name=ecstatic_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 18:34:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-df9778b92013773226197744d556ae6d721eccc9fd56d0220d87c5f825e1915e-merged.mount: Deactivated successfully.
Jan 23 18:34:54 compute-0 podman[80598]: 2026-01-23 18:34:54.174969745 +0000 UTC m=+0.628084037 container remove d84a9c28722735ae1aafb8d2459ecc42e91d7ddaed4586c4aa581c137390a5b5 (image=quay.io/ceph/ceph:v20, name=ecstatic_goldwasser, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:54 compute-0 systemd[1]: libpod-conmon-d84a9c28722735ae1aafb8d2459ecc42e91d7ddaed4586c4aa581c137390a5b5.scope: Deactivated successfully.
Jan 23 18:34:54 compute-0 sudo[80595]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:54 compute-0 podman[80799]: 2026-01-23 18:34:54.321864783 +0000 UTC m=+0.064468486 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:54 compute-0 sudo[80843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkyjqzqghllunrghcjpzsjvwritstzow ; /usr/bin/python3'
Jan 23 18:34:54 compute-0 sudo[80843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:54 compute-0 podman[80799]: 2026-01-23 18:34:54.422964261 +0000 UTC m=+0.165567964 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:34:54 compute-0 python3[80848]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:34:54 compute-0 podman[80895]: 2026-01-23 18:34:54.573905658 +0000 UTC m=+0.038023848 container create eb71afa04b73b03a5b585a611b1f6bbbcb5909af07d152293f6339331e8ab072 (image=quay.io/ceph/ceph:v20, name=nice_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:54 compute-0 systemd[1]: Started libpod-conmon-eb71afa04b73b03a5b585a611b1f6bbbcb5909af07d152293f6339331e8ab072.scope.
Jan 23 18:34:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248258dbb1997a026470dc0efe83b66619812703df07b34172b53f9b9f1d46f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248258dbb1997a026470dc0efe83b66619812703df07b34172b53f9b9f1d46f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248258dbb1997a026470dc0efe83b66619812703df07b34172b53f9b9f1d46f6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:54 compute-0 podman[80895]: 2026-01-23 18:34:54.555694404 +0000 UTC m=+0.019812604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:54 compute-0 podman[80895]: 2026-01-23 18:34:54.668640727 +0000 UTC m=+0.132758937 container init eb71afa04b73b03a5b585a611b1f6bbbcb5909af07d152293f6339331e8ab072 (image=quay.io/ceph/ceph:v20, name=nice_solomon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:54 compute-0 podman[80895]: 2026-01-23 18:34:54.675859316 +0000 UTC m=+0.139977506 container start eb71afa04b73b03a5b585a611b1f6bbbcb5909af07d152293f6339331e8ab072 (image=quay.io/ceph/ceph:v20, name=nice_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:54 compute-0 podman[80895]: 2026-01-23 18:34:54.680414549 +0000 UTC m=+0.144532769 container attach eb71afa04b73b03a5b585a611b1f6bbbcb5909af07d152293f6339331e8ab072 (image=quay.io/ceph/ceph:v20, name=nice_solomon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:54 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2397186180' entity='client.admin' 
Jan 23 18:34:54 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'crash'
Jan 23 18:34:54 compute-0 sudo[80714]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:34:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:34:54 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:34:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:34:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:34:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:54 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'dashboard'
Jan 23 18:34:54 compute-0 sudo[80986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:34:54 compute-0 sudo[80986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:54 compute-0 sudo[80986]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:54 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 23 18:34:54 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 23 18:34:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 23 18:34:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 23 18:34:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 23 18:34:54 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 23 18:34:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:34:54 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:54 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 18:34:54 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 18:34:54 compute-0 sudo[81011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:54 compute-0 sudo[81011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:54 compute-0 sudo[81011]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:55 compute-0 ceph-mgr[75390]: [progress INFO root] Writing back 2 completed events
Jan 23 18:34:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 18:34:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:55 compute-0 sudo[81036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:55 compute-0 sudo[81036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 23 18:34:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3572804444' entity='client.admin' 
Jan 23 18:34:55 compute-0 systemd[1]: libpod-eb71afa04b73b03a5b585a611b1f6bbbcb5909af07d152293f6339331e8ab072.scope: Deactivated successfully.
Jan 23 18:34:55 compute-0 podman[80895]: 2026-01-23 18:34:55.144054873 +0000 UTC m=+0.608173053 container died eb71afa04b73b03a5b585a611b1f6bbbcb5909af07d152293f6339331e8ab072 (image=quay.io/ceph/ceph:v20, name=nice_solomon, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 18:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-248258dbb1997a026470dc0efe83b66619812703df07b34172b53f9b9f1d46f6-merged.mount: Deactivated successfully.
Jan 23 18:34:55 compute-0 podman[80895]: 2026-01-23 18:34:55.185381972 +0000 UTC m=+0.649500152 container remove eb71afa04b73b03a5b585a611b1f6bbbcb5909af07d152293f6339331e8ab072 (image=quay.io/ceph/ceph:v20, name=nice_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 18:34:55 compute-0 systemd[1]: libpod-conmon-eb71afa04b73b03a5b585a611b1f6bbbcb5909af07d152293f6339331e8ab072.scope: Deactivated successfully.
Jan 23 18:34:55 compute-0 sudo[80843]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:55 compute-0 podman[81088]: 2026-01-23 18:34:55.359677862 +0000 UTC m=+0.046645963 container create 79b9f39462dc295a90170ae6ef8f29a0c4a956e8534c49fa4c944ead3e45ed8d (image=quay.io/ceph/ceph:v20, name=sharp_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:55 compute-0 systemd[1]: Started libpod-conmon-79b9f39462dc295a90170ae6ef8f29a0c4a956e8534c49fa4c944ead3e45ed8d.scope.
Jan 23 18:34:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:55 compute-0 sudo[81127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deymjbnfzesyrtldcdaufndkxmxajiir ; /usr/bin/python3'
Jan 23 18:34:55 compute-0 sudo[81127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:55 compute-0 podman[81088]: 2026-01-23 18:34:55.426097755 +0000 UTC m=+0.113065866 container init 79b9f39462dc295a90170ae6ef8f29a0c4a956e8534c49fa4c944ead3e45ed8d (image=quay.io/ceph/ceph:v20, name=sharp_bouman, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:55 compute-0 podman[81088]: 2026-01-23 18:34:55.332656738 +0000 UTC m=+0.019624869 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:55 compute-0 podman[81088]: 2026-01-23 18:34:55.432812772 +0000 UTC m=+0.119780873 container start 79b9f39462dc295a90170ae6ef8f29a0c4a956e8534c49fa4c944ead3e45ed8d (image=quay.io/ceph/ceph:v20, name=sharp_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 18:34:55 compute-0 sharp_bouman[81128]: 167 167
Jan 23 18:34:55 compute-0 podman[81088]: 2026-01-23 18:34:55.437302104 +0000 UTC m=+0.124270205 container attach 79b9f39462dc295a90170ae6ef8f29a0c4a956e8534c49fa4c944ead3e45ed8d (image=quay.io/ceph/ceph:v20, name=sharp_bouman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:55 compute-0 systemd[1]: libpod-79b9f39462dc295a90170ae6ef8f29a0c4a956e8534c49fa4c944ead3e45ed8d.scope: Deactivated successfully.
Jan 23 18:34:55 compute-0 podman[81088]: 2026-01-23 18:34:55.43792738 +0000 UTC m=+0.124895481 container died 79b9f39462dc295a90170ae6ef8f29a0c4a956e8534c49fa4c944ead3e45ed8d (image=quay.io/ceph/ceph:v20, name=sharp_bouman, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 23 18:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-28fae46b0c57240090ed259a088de0eaf196e3f32e6efb88a8bd904111c2873a-merged.mount: Deactivated successfully.
Jan 23 18:34:55 compute-0 podman[81088]: 2026-01-23 18:34:55.47851149 +0000 UTC m=+0.165479591 container remove 79b9f39462dc295a90170ae6ef8f29a0c4a956e8534c49fa4c944ead3e45ed8d (image=quay.io/ceph/ceph:v20, name=sharp_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:55 compute-0 systemd[1]: libpod-conmon-79b9f39462dc295a90170ae6ef8f29a0c4a956e8534c49fa4c944ead3e45ed8d.scope: Deactivated successfully.
Jan 23 18:34:55 compute-0 sudo[81036]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:34:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:55 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.rszfwe (unknown last config time)...
Jan 23 18:34:55 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.rszfwe (unknown last config time)...
Jan 23 18:34:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rszfwe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 23 18:34:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.rszfwe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 23 18:34:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 18:34:55 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mgr services"} : dispatch
Jan 23 18:34:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:34:55 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:55 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.rszfwe on compute-0
Jan 23 18:34:55 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.rszfwe on compute-0
Jan 23 18:34:55 compute-0 python3[81132]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:34:55 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'devicehealth'
Jan 23 18:34:55 compute-0 sudo[81147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:55 compute-0 sudo[81147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:55 compute-0 sudo[81147]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:55 compute-0 podman[81153]: 2026-01-23 18:34:55.626064094 +0000 UTC m=+0.046661843 container create c4fd7582ec432ba4b715c8c27cfdb8ed160afa67485588193e3ba8ca8b8eca64 (image=quay.io/ceph/ceph:v20, name=charming_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:34:55 compute-0 systemd[1]: Started libpod-conmon-c4fd7582ec432ba4b715c8c27cfdb8ed160afa67485588193e3ba8ca8b8eca64.scope.
Jan 23 18:34:55 compute-0 sudo[81185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:34:55 compute-0 sudo[81185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:55 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 18:34:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb69642c80fc8b755d216200cb9c215c01baedca1e098d824da089bfc98397fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb69642c80fc8b755d216200cb9c215c01baedca1e098d824da089bfc98397fc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb69642c80fc8b755d216200cb9c215c01baedca1e098d824da089bfc98397fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:55 compute-0 podman[81153]: 2026-01-23 18:34:55.600001454 +0000 UTC m=+0.020599233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:55 compute-0 podman[81153]: 2026-01-23 18:34:55.706505066 +0000 UTC m=+0.127102835 container init c4fd7582ec432ba4b715c8c27cfdb8ed160afa67485588193e3ba8ca8b8eca64 (image=quay.io/ceph/ceph:v20, name=charming_faraday, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:55 compute-0 podman[81153]: 2026-01-23 18:34:55.725719695 +0000 UTC m=+0.146317444 container start c4fd7582ec432ba4b715c8c27cfdb8ed160afa67485588193e3ba8ca8b8eca64 (image=quay.io/ceph/ceph:v20, name=charming_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:34:55 compute-0 podman[81153]: 2026-01-23 18:34:55.736629296 +0000 UTC m=+0.157227075 container attach c4fd7582ec432ba4b715c8c27cfdb8ed160afa67485588193e3ba8ca8b8eca64 (image=quay.io/ceph/ceph:v20, name=charming_faraday, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:55 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-hkhaml[80614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 18:34:55 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-hkhaml[80614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 18:34:55 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-hkhaml[80614]:   from numpy import show_config as show_numpy_config
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:55 compute-0 ceph-mon[75097]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:55 compute-0 ceph-mon[75097]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:55 compute-0 ceph-mon[75097]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3572804444' entity='client.admin' 
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.rszfwe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mgr services"} : dispatch
Jan 23 18:34:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:55 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'influx'
Jan 23 18:34:55 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'insights'
Jan 23 18:34:55 compute-0 podman[81250]: 2026-01-23 18:34:55.942674566 +0000 UTC m=+0.033390482 container create e067632e841e4076a62176932bb15a22db82deb0db46a9ba67344b17fc90d30b (image=quay.io/ceph/ceph:v20, name=stoic_torvalds, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:55 compute-0 systemd[1]: Started libpod-conmon-e067632e841e4076a62176932bb15a22db82deb0db46a9ba67344b17fc90d30b.scope.
Jan 23 18:34:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:55 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'iostat'
Jan 23 18:34:55 compute-0 podman[81250]: 2026-01-23 18:34:55.997715176 +0000 UTC m=+0.088431082 container init e067632e841e4076a62176932bb15a22db82deb0db46a9ba67344b17fc90d30b (image=quay.io/ceph/ceph:v20, name=stoic_torvalds, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:56 compute-0 podman[81250]: 2026-01-23 18:34:56.00667727 +0000 UTC m=+0.097393186 container start e067632e841e4076a62176932bb15a22db82deb0db46a9ba67344b17fc90d30b (image=quay.io/ceph/ceph:v20, name=stoic_torvalds, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:56 compute-0 stoic_torvalds[81264]: 167 167
Jan 23 18:34:56 compute-0 systemd[1]: libpod-e067632e841e4076a62176932bb15a22db82deb0db46a9ba67344b17fc90d30b.scope: Deactivated successfully.
Jan 23 18:34:56 compute-0 podman[81250]: 2026-01-23 18:34:56.010944386 +0000 UTC m=+0.101660322 container attach e067632e841e4076a62176932bb15a22db82deb0db46a9ba67344b17fc90d30b (image=quay.io/ceph/ceph:v20, name=stoic_torvalds, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:56 compute-0 podman[81250]: 2026-01-23 18:34:56.011381737 +0000 UTC m=+0.102097653 container died e067632e841e4076a62176932bb15a22db82deb0db46a9ba67344b17fc90d30b (image=quay.io/ceph/ceph:v20, name=stoic_torvalds, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:56 compute-0 podman[81250]: 2026-01-23 18:34:55.927906278 +0000 UTC m=+0.018622214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:56 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:56 compute-0 podman[81250]: 2026-01-23 18:34:56.055767732 +0000 UTC m=+0.146483648 container remove e067632e841e4076a62176932bb15a22db82deb0db46a9ba67344b17fc90d30b (image=quay.io/ceph/ceph:v20, name=stoic_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 23 18:34:56 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'k8sevents'
Jan 23 18:34:56 compute-0 systemd[1]: libpod-conmon-e067632e841e4076a62176932bb15a22db82deb0db46a9ba67344b17fc90d30b.scope: Deactivated successfully.
Jan 23 18:34:56 compute-0 sudo[81185]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:34:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 23 18:34:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/106605048' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 23 18:34:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-630fcc667dd261e96236b1c9d68410048399bfc16fe571595b5c69aefa0fed63-merged.mount: Deactivated successfully.
Jan 23 18:34:56 compute-0 sudo[81284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:56 compute-0 sudo[81284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:56 compute-0 sudo[81284]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:56 compute-0 sudo[81309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:34:56 compute-0 sudo[81309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:56 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'localpool'
Jan 23 18:34:56 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 18:34:56 compute-0 podman[81378]: 2026-01-23 18:34:56.677983113 +0000 UTC m=+0.052028926 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:34:56 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'mirroring'
Jan 23 18:34:56 compute-0 podman[81378]: 2026-01-23 18:34:56.770506076 +0000 UTC m=+0.144551809 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:34:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 23 18:34:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:34:56 compute-0 ceph-mon[75097]: Reconfiguring mgr.compute-0.rszfwe (unknown last config time)...
Jan 23 18:34:56 compute-0 ceph-mon[75097]: Reconfiguring daemon mgr.compute-0.rszfwe on compute-0
Jan 23 18:34:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/106605048' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 23 18:34:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:56 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'nfs'
Jan 23 18:34:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/106605048' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 23 18:34:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 23 18:34:56 compute-0 charming_faraday[81212]: set require_min_compat_client to mimic
Jan 23 18:34:56 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 23 18:34:56 compute-0 systemd[1]: libpod-c4fd7582ec432ba4b715c8c27cfdb8ed160afa67485588193e3ba8ca8b8eca64.scope: Deactivated successfully.
Jan 23 18:34:56 compute-0 podman[81153]: 2026-01-23 18:34:56.892219926 +0000 UTC m=+1.312817675 container died c4fd7582ec432ba4b715c8c27cfdb8ed160afa67485588193e3ba8ca8b8eca64 (image=quay.io/ceph/ceph:v20, name=charming_faraday, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb69642c80fc8b755d216200cb9c215c01baedca1e098d824da089bfc98397fc-merged.mount: Deactivated successfully.
Jan 23 18:34:56 compute-0 podman[81153]: 2026-01-23 18:34:56.927983797 +0000 UTC m=+1.348581546 container remove c4fd7582ec432ba4b715c8c27cfdb8ed160afa67485588193e3ba8ca8b8eca64 (image=quay.io/ceph/ceph:v20, name=charming_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 23 18:34:56 compute-0 systemd[1]: libpod-conmon-c4fd7582ec432ba4b715c8c27cfdb8ed160afa67485588193e3ba8ca8b8eca64.scope: Deactivated successfully.
Jan 23 18:34:56 compute-0 sudo[81127]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:57 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'orchestrator'
Jan 23 18:34:57 compute-0 sudo[81309]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:34:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:34:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:34:57 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:34:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:34:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:34:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:57 compute-0 sudo[81506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:34:57 compute-0 sudo[81506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:57 compute-0 sudo[81506]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:57 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 18:34:57 compute-0 sudo[81554]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqmzarvtfrptxonyfpwmxmudpxfjmwiq ; /usr/bin/python3'
Jan 23 18:34:57 compute-0 sudo[81554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:57 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'osd_support'
Jan 23 18:34:57 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 18:34:57 compute-0 python3[81556]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:34:57 compute-0 podman[81557]: 2026-01-23 18:34:57.586872311 +0000 UTC m=+0.052583820 container create 1f3fdefad1e620cf3ce84727be23a51b8a8b5a8b2501b43e2111a36d1b6046f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_montalcini, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:34:57 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'progress'
Jan 23 18:34:57 compute-0 systemd[1]: Started libpod-conmon-1f3fdefad1e620cf3ce84727be23a51b8a8b5a8b2501b43e2111a36d1b6046f5.scope.
Jan 23 18:34:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93369b903dc26d7f0b2b208ba5ec695a6b8af6bfe73d19c20353f5edc4c51aa5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93369b903dc26d7f0b2b208ba5ec695a6b8af6bfe73d19c20353f5edc4c51aa5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93369b903dc26d7f0b2b208ba5ec695a6b8af6bfe73d19c20353f5edc4c51aa5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:57 compute-0 podman[81557]: 2026-01-23 18:34:57.567451427 +0000 UTC m=+0.033162946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:34:57 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'prometheus'
Jan 23 18:34:57 compute-0 podman[81557]: 2026-01-23 18:34:57.668780801 +0000 UTC m=+0.134492360 container init 1f3fdefad1e620cf3ce84727be23a51b8a8b5a8b2501b43e2111a36d1b6046f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 18:34:57 compute-0 podman[81557]: 2026-01-23 18:34:57.681156769 +0000 UTC m=+0.146868288 container start 1f3fdefad1e620cf3ce84727be23a51b8a8b5a8b2501b43e2111a36d1b6046f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:34:57 compute-0 podman[81557]: 2026-01-23 18:34:57.685768764 +0000 UTC m=+0.151480283 container attach 1f3fdefad1e620cf3ce84727be23a51b8a8b5a8b2501b43e2111a36d1b6046f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:34:57 compute-0 ceph-mon[75097]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/106605048' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 23 18:34:57 compute-0 ceph-mon[75097]: osdmap e3: 0 total, 0 up, 0 in
Jan 23 18:34:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:34:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:58 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'rbd_support'
Jan 23 18:34:58 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:34:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:58 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'rgw'
Jan 23 18:34:58 compute-0 sudo[81596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:58 compute-0 sudo[81596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:58 compute-0 sudo[81596]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:58 compute-0 sudo[81621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 23 18:34:58 compute-0 sudo[81621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:58 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'rook'
Jan 23 18:34:58 compute-0 sudo[81621]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 18:34:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:34:58 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'selftest'
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:59 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'smb'
Jan 23 18:34:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 18:34:59 compute-0 ceph-mon[75097]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: [cephadm INFO root] Added host compute-0
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 23 18:34:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 23 18:34:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 23 18:34:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 23 18:34:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 23 18:34:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev 4061d8bd-7629-4e1a-982a-8fa5834963de (Updating mgr deployment (-1 -> 1))
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.hkhaml from compute-0 -- ports [8765]
Jan 23 18:34:59 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.hkhaml from compute-0 -- ports [8765]
Jan 23 18:34:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:34:59 compute-0 nostalgic_montalcini[81572]: Added host 'compute-0' with addr '192.168.122.100'
Jan 23 18:34:59 compute-0 nostalgic_montalcini[81572]: Scheduled mon update...
Jan 23 18:34:59 compute-0 nostalgic_montalcini[81572]: Scheduled mgr update...
Jan 23 18:34:59 compute-0 nostalgic_montalcini[81572]: Scheduled osd.default_drive_group update...
Jan 23 18:34:59 compute-0 systemd[1]: libpod-1f3fdefad1e620cf3ce84727be23a51b8a8b5a8b2501b43e2111a36d1b6046f5.scope: Deactivated successfully.
Jan 23 18:34:59 compute-0 podman[81557]: 2026-01-23 18:34:59.354455659 +0000 UTC m=+1.820167168 container died 1f3fdefad1e620cf3ce84727be23a51b8a8b5a8b2501b43e2111a36d1b6046f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:34:59 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'snap_schedule'
Jan 23 18:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-93369b903dc26d7f0b2b208ba5ec695a6b8af6bfe73d19c20353f5edc4c51aa5-merged.mount: Deactivated successfully.
Jan 23 18:34:59 compute-0 sudo[81666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:34:59 compute-0 sudo[81666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:59 compute-0 sudo[81666]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:59 compute-0 podman[81557]: 2026-01-23 18:34:59.409311414 +0000 UTC m=+1.875022923 container remove 1f3fdefad1e620cf3ce84727be23a51b8a8b5a8b2501b43e2111a36d1b6046f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 23 18:34:59 compute-0 systemd[1]: libpod-conmon-1f3fdefad1e620cf3ce84727be23a51b8a8b5a8b2501b43e2111a36d1b6046f5.scope: Deactivated successfully.
Jan 23 18:34:59 compute-0 sudo[81554]: pam_unix(sudo:session): session closed for user root
Jan 23 18:34:59 compute-0 sudo[81703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --name mgr.compute-0.hkhaml --force --tcp-ports 8765
Jan 23 18:34:59 compute-0 sudo[81703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:34:59 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'stats'
Jan 23 18:34:59 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'status'
Jan 23 18:34:59 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'telegraf'
Jan 23 18:34:59 compute-0 sudo[81764]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obnjuhddypymfxzpslkkxfdnpqotfyxd ; /usr/bin/python3'
Jan 23 18:34:59 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'telemetry'
Jan 23 18:34:59 compute-0 sudo[81764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:34:59 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.hkhaml for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:34:59 compute-0 ceph-mgr[80623]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 18:34:59 compute-0 python3[81767]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:34:59 compute-0 podman[81793]: 2026-01-23 18:34:59.921017324 +0000 UTC m=+0.045438102 container create 131bbda8cbb23de33a2eb20d17c699275d23d539e440dd7ba441818eafae834d (image=quay.io/ceph/ceph:v20, name=hungry_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 18:34:59 compute-0 systemd[1]: Started libpod-conmon-131bbda8cbb23de33a2eb20d17c699275d23d539e440dd7ba441818eafae834d.scope.
Jan 23 18:34:59 compute-0 podman[81795]: 2026-01-23 18:34:59.974527387 +0000 UTC m=+0.102756810 container died 568b223c0dbbf59be381088fe76cb1eb2d494be57db448d55fbcafb69df32ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-hkhaml, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:34:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:34:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df742bddf3f009bb874e871f493b4f2ad9556f84bf9bed70288a024bf4c7a38a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df742bddf3f009bb874e871f493b4f2ad9556f84bf9bed70288a024bf4c7a38a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:34:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df742bddf3f009bb874e871f493b4f2ad9556f84bf9bed70288a024bf4c7a38a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:00 compute-0 podman[81793]: 2026-01-23 18:34:59.904321298 +0000 UTC m=+0.028742106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3038d510ce785bb077123825e6ea3119a0092354b142b78982f56762b2c11b47-merged.mount: Deactivated successfully.
Jan 23 18:35:00 compute-0 podman[81793]: 2026-01-23 18:35:00.007014965 +0000 UTC m=+0.131435763 container init 131bbda8cbb23de33a2eb20d17c699275d23d539e440dd7ba441818eafae834d (image=quay.io/ceph/ceph:v20, name=hungry_jang, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:00 compute-0 podman[81793]: 2026-01-23 18:35:00.015048006 +0000 UTC m=+0.139468784 container start 131bbda8cbb23de33a2eb20d17c699275d23d539e440dd7ba441818eafae834d (image=quay.io/ceph/ceph:v20, name=hungry_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:00 compute-0 podman[81793]: 2026-01-23 18:35:00.027599618 +0000 UTC m=+0.152020396 container attach 131bbda8cbb23de33a2eb20d17c699275d23d539e440dd7ba441818eafae834d (image=quay.io/ceph/ceph:v20, name=hungry_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 18:35:00 compute-0 podman[81795]: 2026-01-23 18:35:00.032001588 +0000 UTC m=+0.160231011 container remove 568b223c0dbbf59be381088fe76cb1eb2d494be57db448d55fbcafb69df32ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-hkhaml, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:35:00 compute-0 bash[81795]: ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-hkhaml
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:35:00 compute-0 systemd[1]: ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4@mgr.compute-0.hkhaml.service: Main process exited, code=exited, status=143/n/a
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:35:00 compute-0 systemd[1]: ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4@mgr.compute-0.hkhaml.service: Failed with result 'exit-code'.
Jan 23 18:35:00 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.hkhaml for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:35:00 compute-0 systemd[1]: ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4@mgr.compute-0.hkhaml.service: Consumed 7.057s CPU time, 422.8M memory peak, read 0B from disk, written 182.0K to disk.
Jan 23 18:35:00 compute-0 systemd[1]: Reloading.
Jan 23 18:35:00 compute-0 ceph-mon[75097]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 ceph-mon[75097]: Added host compute-0
Jan 23 18:35:00 compute-0 ceph-mon[75097]: Saving service mon spec with placement compute-0
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 ceph-mon[75097]: Saving service mgr spec with placement compute-0
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 ceph-mon[75097]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 23 18:35:00 compute-0 ceph-mon[75097]: Saving service osd.default_drive_group spec with placement compute-0
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 ceph-mon[75097]: Removing daemon mgr.compute-0.hkhaml from compute-0 -- ports [8765]
Jan 23 18:35:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 systemd-rc-local-generator[81913]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:35:00 compute-0 systemd-sysv-generator[81917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:35:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 23 18:35:00 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041007733' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 23 18:35:00 compute-0 hungry_jang[81825]: 
Jan 23 18:35:00 compute-0 hungry_jang[81825]: {"fsid":"4da2adac-d096-5ead-9ec5-3108259ba9e4","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":53,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-23T18:34:04:461111+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-23T18:34:04.464417+0000","services":{}},"progress_events":{}}
Jan 23 18:35:00 compute-0 systemd[1]: libpod-131bbda8cbb23de33a2eb20d17c699275d23d539e440dd7ba441818eafae834d.scope: Deactivated successfully.
Jan 23 18:35:00 compute-0 podman[81793]: 2026-01-23 18:35:00.559664485 +0000 UTC m=+0.684085293 container died 131bbda8cbb23de33a2eb20d17c699275d23d539e440dd7ba441818eafae834d (image=quay.io/ceph/ceph:v20, name=hungry_jang, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:35:00 compute-0 sudo[81703]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.hkhaml
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.hkhaml
Jan 23 18:35:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.hkhaml"} v 0)
Jan 23 18:35:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.hkhaml"} : dispatch
Jan 23 18:35:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.hkhaml"}]': finished
Jan 23 18:35:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 18:35:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-df742bddf3f009bb874e871f493b4f2ad9556f84bf9bed70288a024bf4c7a38a-merged.mount: Deactivated successfully.
Jan 23 18:35:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev 4061d8bd-7629-4e1a-982a-8fa5834963de (Updating mgr deployment (-1 -> 1))
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event 4061d8bd-7629-4e1a-982a-8fa5834963de (Updating mgr deployment (-1 -> 1)) in 1 seconds
Jan 23 18:35:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 18:35:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:00 compute-0 podman[81793]: 2026-01-23 18:35:00.613036924 +0000 UTC m=+0.737457702 container remove 131bbda8cbb23de33a2eb20d17c699275d23d539e440dd7ba441818eafae834d (image=quay.io/ceph/ceph:v20, name=hungry_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:35:00 compute-0 systemd[1]: libpod-conmon-131bbda8cbb23de33a2eb20d17c699275d23d539e440dd7ba441818eafae834d.scope: Deactivated successfully.
Jan 23 18:35:00 compute-0 sudo[81764]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:00 compute-0 sudo[81945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:35:00 compute-0 sudo[81945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:00 compute-0 sudo[81945]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:00 compute-0 sudo[81970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:00 compute-0 sudo[81970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:00 compute-0 sudo[81970]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:00 compute-0 sudo[81995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:35:00 compute-0 sudo[81995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:01 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2041007733' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 23 18:35:01 compute-0 ceph-mon[75097]: Removing key for mgr.compute-0.hkhaml
Jan 23 18:35:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.hkhaml"} : dispatch
Jan 23 18:35:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.hkhaml"}]': finished
Jan 23 18:35:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:01 compute-0 ceph-mon[75097]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:01 compute-0 podman[82062]: 2026-01-23 18:35:01.561081367 +0000 UTC m=+0.365873721 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 18:35:01 compute-0 podman[82062]: 2026-01-23 18:35:01.650265146 +0000 UTC m=+0.455057480 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 18:35:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:02 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:02 compute-0 sudo[81995]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:35:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:35:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:35:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:35:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:35:02 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:35:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:35:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:35:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:35:02 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:35:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:35:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:35:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:35:02 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:02 compute-0 sudo[82154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:02 compute-0 sudo[82154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:02 compute-0 sudo[82154]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:02 compute-0 sudo[82179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:35:02 compute-0 sudo[82179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:02 compute-0 podman[82216]: 2026-01-23 18:35:02.793034129 +0000 UTC m=+0.022100802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:03 compute-0 podman[82216]: 2026-01-23 18:35:03.147976675 +0000 UTC m=+0.377043338 container create 47771ed0cc980bc7f8da297b57538742f10db91d6e952c7fd0be61e24a75f925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 23 18:35:03 compute-0 systemd[1]: Started libpod-conmon-47771ed0cc980bc7f8da297b57538742f10db91d6e952c7fd0be61e24a75f925.scope.
Jan 23 18:35:03 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:03 compute-0 podman[82216]: 2026-01-23 18:35:03.286414082 +0000 UTC m=+0.515480755 container init 47771ed0cc980bc7f8da297b57538742f10db91d6e952c7fd0be61e24a75f925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:03 compute-0 podman[82216]: 2026-01-23 18:35:03.292816032 +0000 UTC m=+0.521882685 container start 47771ed0cc980bc7f8da297b57538742f10db91d6e952c7fd0be61e24a75f925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 18:35:03 compute-0 podman[82216]: 2026-01-23 18:35:03.29715412 +0000 UTC m=+0.526220803 container attach 47771ed0cc980bc7f8da297b57538742f10db91d6e952c7fd0be61e24a75f925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:03 compute-0 heuristic_benz[82232]: 167 167
Jan 23 18:35:03 compute-0 systemd[1]: libpod-47771ed0cc980bc7f8da297b57538742f10db91d6e952c7fd0be61e24a75f925.scope: Deactivated successfully.
Jan 23 18:35:03 compute-0 podman[82216]: 2026-01-23 18:35:03.299937499 +0000 UTC m=+0.529004212 container died 47771ed0cc980bc7f8da297b57538742f10db91d6e952c7fd0be61e24a75f925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5503a699545da2440f1af48042464525f68d5d6ccd4c4b99feb127c5c20c5ee4-merged.mount: Deactivated successfully.
Jan 23 18:35:03 compute-0 podman[82216]: 2026-01-23 18:35:03.338024977 +0000 UTC m=+0.567091630 container remove 47771ed0cc980bc7f8da297b57538742f10db91d6e952c7fd0be61e24a75f925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:35:03 compute-0 systemd[1]: libpod-conmon-47771ed0cc980bc7f8da297b57538742f10db91d6e952c7fd0be61e24a75f925.scope: Deactivated successfully.
Jan 23 18:35:03 compute-0 podman[82255]: 2026-01-23 18:35:03.478100614 +0000 UTC m=+0.022729886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:35:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:35:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:35:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:03 compute-0 ceph-mon[75097]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:03 compute-0 podman[82255]: 2026-01-23 18:35:03.803447924 +0000 UTC m=+0.348077236 container create 47b162aefc15fde8ddc180817e786796244a1c0b99071c95c66dbcbf0685c1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_solomon, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:04 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:04 compute-0 systemd[1]: Started libpod-conmon-47b162aefc15fde8ddc180817e786796244a1c0b99071c95c66dbcbf0685c1a6.scope.
Jan 23 18:35:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ef97057c19017988e6f0922c213b4391564000b986e8ed799bf9e2c173bdd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ef97057c19017988e6f0922c213b4391564000b986e8ed799bf9e2c173bdd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ef97057c19017988e6f0922c213b4391564000b986e8ed799bf9e2c173bdd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ef97057c19017988e6f0922c213b4391564000b986e8ed799bf9e2c173bdd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ef97057c19017988e6f0922c213b4391564000b986e8ed799bf9e2c173bdd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:04 compute-0 podman[82255]: 2026-01-23 18:35:04.896860876 +0000 UTC m=+1.441490178 container init 47b162aefc15fde8ddc180817e786796244a1c0b99071c95c66dbcbf0685c1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 23 18:35:04 compute-0 podman[82255]: 2026-01-23 18:35:04.907702716 +0000 UTC m=+1.452332038 container start 47b162aefc15fde8ddc180817e786796244a1c0b99071c95c66dbcbf0685c1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_solomon, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:05 compute-0 podman[82255]: 2026-01-23 18:35:05.003134073 +0000 UTC m=+1.547763385 container attach 47b162aefc15fde8ddc180817e786796244a1c0b99071c95c66dbcbf0685c1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_solomon, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 23 18:35:05 compute-0 ceph-mgr[75390]: [progress INFO root] Writing back 3 completed events
Jan 23 18:35:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 18:35:05 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:05 compute-0 gallant_solomon[82271]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:35:05 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:05 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:05 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 066002fe-249e-443f-8741-9dfd30f99103
Jan 23 18:35:06 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "066002fe-249e-443f-8741-9dfd30f99103"} v 0)
Jan 23 18:35:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4260122091' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "066002fe-249e-443f-8741-9dfd30f99103"} : dispatch
Jan 23 18:35:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 23 18:35:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:35:06 compute-0 ceph-mon[75097]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4260122091' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "066002fe-249e-443f-8741-9dfd30f99103"}]': finished
Jan 23 18:35:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 23 18:35:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 23 18:35:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:06 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:06 compute-0 gallant_solomon[82271]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 23 18:35:06 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 23 18:35:06 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 18:35:06 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:06 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 23 18:35:06 compute-0 lvm[82364]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:35:06 compute-0 lvm[82364]: VG ceph_vg0 finished
Jan 23 18:35:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 23 18:35:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4048633653' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 23 18:35:06 compute-0 gallant_solomon[82271]:  stderr: got monmap epoch 1
Jan 23 18:35:06 compute-0 gallant_solomon[82271]: --> Creating keyring file for osd.0
Jan 23 18:35:06 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 23 18:35:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:06 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 23 18:35:06 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 066002fe-249e-443f-8741-9dfd30f99103 --setuser ceph --setgroup ceph
Jan 23 18:35:07 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 23 18:35:07 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 23 18:35:07 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4260122091' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "066002fe-249e-443f-8741-9dfd30f99103"} : dispatch
Jan 23 18:35:07 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4260122091' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "066002fe-249e-443f-8741-9dfd30f99103"}]': finished
Jan 23 18:35:07 compute-0 ceph-mon[75097]: osdmap e4: 1 total, 0 up, 1 in
Jan 23 18:35:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:07 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4048633653' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 23 18:35:07 compute-0 ceph-mon[75097]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:08 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:08 compute-0 gallant_solomon[82271]:  stderr: 2026-01-23T18:35:06.929+0000 7fa969f2b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 23 18:35:08 compute-0 gallant_solomon[82271]:  stderr: 2026-01-23T18:35:06.959+0000 7fa969f2b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:08 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f6ed4487-bd34-4323-9098-4b01a850bb6f
Jan 23 18:35:09 compute-0 ceph-mon[75097]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 23 18:35:09 compute-0 ceph-mon[75097]: Cluster is now healthy
Jan 23 18:35:10 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f6ed4487-bd34-4323-9098-4b01a850bb6f"} v 0)
Jan 23 18:35:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4243947694' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f6ed4487-bd34-4323-9098-4b01a850bb6f"} : dispatch
Jan 23 18:35:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 23 18:35:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:35:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4243947694' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f6ed4487-bd34-4323-9098-4b01a850bb6f"}]': finished
Jan 23 18:35:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 23 18:35:10 compute-0 ceph-mon[75097]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:10 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4243947694' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f6ed4487-bd34-4323-9098-4b01a850bb6f"} : dispatch
Jan 23 18:35:10 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 23 18:35:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:10 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:10 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:10 compute-0 gallant_solomon[82271]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 23 18:35:10 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 23 18:35:10 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 23 18:35:10 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:10 compute-0 lvm[83318]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:35:10 compute-0 lvm[83318]: VG ceph_vg1 finished
Jan 23 18:35:10 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 23 18:35:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 23 18:35:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3099701462' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 23 18:35:10 compute-0 gallant_solomon[82271]:  stderr: got monmap epoch 1
Jan 23 18:35:10 compute-0 gallant_solomon[82271]: --> Creating keyring file for osd.1
Jan 23 18:35:11 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 23 18:35:11 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 23 18:35:11 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid f6ed4487-bd34-4323-9098-4b01a850bb6f --setuser ceph --setgroup ceph
Jan 23 18:35:11 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4243947694' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f6ed4487-bd34-4323-9098-4b01a850bb6f"}]': finished
Jan 23 18:35:11 compute-0 ceph-mon[75097]: osdmap e5: 2 total, 0 up, 2 in
Jan 23 18:35:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:11 compute-0 ceph-mon[75097]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:11 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3099701462' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 23 18:35:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:12 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:12 compute-0 gallant_solomon[82271]:  stderr: 2026-01-23T18:35:11.063+0000 7fb6e6c758c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 23 18:35:12 compute-0 gallant_solomon[82271]:  stderr: 2026-01-23T18:35:11.086+0000 7fb6e6c758c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:12 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a7717231-3361-44cc-a7c1-2d888f6f7851
Jan 23 18:35:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a7717231-3361-44cc-a7c1-2d888f6f7851"} v 0)
Jan 23 18:35:12 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/478303736' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "a7717231-3361-44cc-a7c1-2d888f6f7851"} : dispatch
Jan 23 18:35:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 23 18:35:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:35:12 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/478303736' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a7717231-3361-44cc-a7c1-2d888f6f7851"}]': finished
Jan 23 18:35:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 23 18:35:12 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 23 18:35:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:12 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:12 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:12 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:12 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:12 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:12 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:12 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/478303736' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "a7717231-3361-44cc-a7c1-2d888f6f7851"} : dispatch
Jan 23 18:35:12 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/478303736' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a7717231-3361-44cc-a7c1-2d888f6f7851"}]': finished
Jan 23 18:35:12 compute-0 ceph-mon[75097]: osdmap e6: 3 total, 0 up, 3 in
Jan 23 18:35:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:13 compute-0 lvm[84276]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:35:13 compute-0 lvm[84276]: VG ceph_vg2 finished
Jan 23 18:35:13 compute-0 gallant_solomon[82271]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 23 18:35:13 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 23 18:35:13 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 23 18:35:13 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:13 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 23 18:35:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 23 18:35:13 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807240002' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 23 18:35:13 compute-0 gallant_solomon[82271]:  stderr: got monmap epoch 1
Jan 23 18:35:13 compute-0 gallant_solomon[82271]: --> Creating keyring file for osd.2
Jan 23 18:35:13 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 23 18:35:13 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 23 18:35:13 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid a7717231-3361-44cc-a7c1-2d888f6f7851 --setuser ceph --setgroup ceph
Jan 23 18:35:13 compute-0 ceph-mon[75097]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:13 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/807240002' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 23 18:35:14 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:14 compute-0 gallant_solomon[82271]:  stderr: 2026-01-23T18:35:13.729+0000 7f69249e98c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 23 18:35:14 compute-0 gallant_solomon[82271]:  stderr: 2026-01-23T18:35:13.748+0000 7f69249e98c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 23 18:35:14 compute-0 gallant_solomon[82271]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 23 18:35:14 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 23 18:35:14 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 23 18:35:14 compute-0 gallant_solomon[82271]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:14 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:14 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 23 18:35:14 compute-0 gallant_solomon[82271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 23 18:35:14 compute-0 gallant_solomon[82271]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 23 18:35:14 compute-0 gallant_solomon[82271]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 23 18:35:14 compute-0 systemd[1]: libpod-47b162aefc15fde8ddc180817e786796244a1c0b99071c95c66dbcbf0685c1a6.scope: Deactivated successfully.
Jan 23 18:35:14 compute-0 systemd[1]: libpod-47b162aefc15fde8ddc180817e786796244a1c0b99071c95c66dbcbf0685c1a6.scope: Consumed 5.910s CPU time.
Jan 23 18:35:14 compute-0 podman[82255]: 2026-01-23 18:35:14.757913856 +0000 UTC m=+11.302543128 container died 47b162aefc15fde8ddc180817e786796244a1c0b99071c95c66dbcbf0685c1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 18:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-28ef97057c19017988e6f0922c213b4391564000b986e8ed799bf9e2c173bdd9-merged.mount: Deactivated successfully.
Jan 23 18:35:14 compute-0 podman[82255]: 2026-01-23 18:35:14.807129121 +0000 UTC m=+11.351758393 container remove 47b162aefc15fde8ddc180817e786796244a1c0b99071c95c66dbcbf0685c1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:35:14 compute-0 systemd[1]: libpod-conmon-47b162aefc15fde8ddc180817e786796244a1c0b99071c95c66dbcbf0685c1a6.scope: Deactivated successfully.
Jan 23 18:35:14 compute-0 sudo[82179]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:14 compute-0 sudo[85212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:14 compute-0 sudo[85212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:14 compute-0 sudo[85212]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:14 compute-0 sudo[85237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:35:14 compute-0 sudo[85237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:15 compute-0 podman[85273]: 2026-01-23 18:35:15.263966556 +0000 UTC m=+0.045022553 container create 03ee5f1487b686dd39d1fba09a291c7ff3ef3b6783dc4fab45b9ae48a4c4973a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 18:35:15 compute-0 systemd[1]: Started libpod-conmon-03ee5f1487b686dd39d1fba09a291c7ff3ef3b6783dc4fab45b9ae48a4c4973a.scope.
Jan 23 18:35:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:15 compute-0 podman[85273]: 2026-01-23 18:35:15.242667215 +0000 UTC m=+0.023723262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:15 compute-0 podman[85273]: 2026-01-23 18:35:15.348010398 +0000 UTC m=+0.129066415 container init 03ee5f1487b686dd39d1fba09a291c7ff3ef3b6783dc4fab45b9ae48a4c4973a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:15 compute-0 podman[85273]: 2026-01-23 18:35:15.356987471 +0000 UTC m=+0.138043468 container start 03ee5f1487b686dd39d1fba09a291c7ff3ef3b6783dc4fab45b9ae48a4c4973a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:15 compute-0 interesting_moser[85289]: 167 167
Jan 23 18:35:15 compute-0 systemd[1]: libpod-03ee5f1487b686dd39d1fba09a291c7ff3ef3b6783dc4fab45b9ae48a4c4973a.scope: Deactivated successfully.
Jan 23 18:35:15 compute-0 podman[85273]: 2026-01-23 18:35:15.364353234 +0000 UTC m=+0.145409321 container attach 03ee5f1487b686dd39d1fba09a291c7ff3ef3b6783dc4fab45b9ae48a4c4973a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:15 compute-0 podman[85273]: 2026-01-23 18:35:15.365056012 +0000 UTC m=+0.146112039 container died 03ee5f1487b686dd39d1fba09a291c7ff3ef3b6783dc4fab45b9ae48a4c4973a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ec30c34d2f74d9da7800cc6f1afe6461e956ec64bc3cbe027ef172f1d439be4-merged.mount: Deactivated successfully.
Jan 23 18:35:15 compute-0 podman[85273]: 2026-01-23 18:35:15.411899448 +0000 UTC m=+0.192955445 container remove 03ee5f1487b686dd39d1fba09a291c7ff3ef3b6783dc4fab45b9ae48a4c4973a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:15 compute-0 systemd[1]: libpod-conmon-03ee5f1487b686dd39d1fba09a291c7ff3ef3b6783dc4fab45b9ae48a4c4973a.scope: Deactivated successfully.
Jan 23 18:35:15 compute-0 podman[85313]: 2026-01-23 18:35:15.593782477 +0000 UTC m=+0.071894081 container create b36646484eb8a2fe4acf40b7fc82660bd217eede0844725604ae5b11db6378ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:15 compute-0 podman[85313]: 2026-01-23 18:35:15.550722544 +0000 UTC m=+0.028834158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:15 compute-0 systemd[1]: Started libpod-conmon-b36646484eb8a2fe4acf40b7fc82660bd217eede0844725604ae5b11db6378ab.scope.
Jan 23 18:35:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95e9d7cf2467cf1c13551936a618ad3e3adcfebf540c4d2857ea821d8767d10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95e9d7cf2467cf1c13551936a618ad3e3adcfebf540c4d2857ea821d8767d10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95e9d7cf2467cf1c13551936a618ad3e3adcfebf540c4d2857ea821d8767d10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95e9d7cf2467cf1c13551936a618ad3e3adcfebf540c4d2857ea821d8767d10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:15 compute-0 podman[85313]: 2026-01-23 18:35:15.711783094 +0000 UTC m=+0.189894668 container init b36646484eb8a2fe4acf40b7fc82660bd217eede0844725604ae5b11db6378ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 23 18:35:15 compute-0 podman[85313]: 2026-01-23 18:35:15.72204227 +0000 UTC m=+0.200153834 container start b36646484eb8a2fe4acf40b7fc82660bd217eede0844725604ae5b11db6378ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_burnell, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:15 compute-0 podman[85313]: 2026-01-23 18:35:15.734942311 +0000 UTC m=+0.213053905 container attach b36646484eb8a2fe4acf40b7fc82660bd217eede0844725604ae5b11db6378ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:15 compute-0 ceph-mon[75097]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:16 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:16 compute-0 hungry_burnell[85330]: {
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:     "0": [
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:         {
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "devices": [
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "/dev/loop3"
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             ],
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_name": "ceph_lv0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_size": "21470642176",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "name": "ceph_lv0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "tags": {
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.cluster_name": "ceph",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.crush_device_class": "",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.encrypted": "0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.objectstore": "bluestore",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.osd_id": "0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.type": "block",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.vdo": "0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.with_tpm": "0"
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             },
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "type": "block",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "vg_name": "ceph_vg0"
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:         }
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:     ],
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:     "1": [
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:         {
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "devices": [
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "/dev/loop4"
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             ],
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_name": "ceph_lv1",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_size": "21470642176",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "name": "ceph_lv1",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "tags": {
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.cluster_name": "ceph",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.crush_device_class": "",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.encrypted": "0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.objectstore": "bluestore",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.osd_id": "1",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.type": "block",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.vdo": "0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.with_tpm": "0"
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             },
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "type": "block",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "vg_name": "ceph_vg1"
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:         }
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:     ],
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:     "2": [
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:         {
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "devices": [
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "/dev/loop5"
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             ],
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_name": "ceph_lv2",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_size": "21470642176",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "name": "ceph_lv2",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "tags": {
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.cluster_name": "ceph",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.crush_device_class": "",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.encrypted": "0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.objectstore": "bluestore",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.osd_id": "2",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.type": "block",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.vdo": "0",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:                 "ceph.with_tpm": "0"
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             },
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "type": "block",
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:             "vg_name": "ceph_vg2"
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:         }
Jan 23 18:35:16 compute-0 hungry_burnell[85330]:     ]
Jan 23 18:35:16 compute-0 hungry_burnell[85330]: }
Jan 23 18:35:16 compute-0 systemd[1]: libpod-b36646484eb8a2fe4acf40b7fc82660bd217eede0844725604ae5b11db6378ab.scope: Deactivated successfully.
Jan 23 18:35:16 compute-0 podman[85313]: 2026-01-23 18:35:16.07390065 +0000 UTC m=+0.552012234 container died b36646484eb8a2fe4acf40b7fc82660bd217eede0844725604ae5b11db6378ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 18:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b95e9d7cf2467cf1c13551936a618ad3e3adcfebf540c4d2857ea821d8767d10-merged.mount: Deactivated successfully.
Jan 23 18:35:16 compute-0 podman[85313]: 2026-01-23 18:35:16.113398763 +0000 UTC m=+0.591510327 container remove b36646484eb8a2fe4acf40b7fc82660bd217eede0844725604ae5b11db6378ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 18:35:16 compute-0 systemd[1]: libpod-conmon-b36646484eb8a2fe4acf40b7fc82660bd217eede0844725604ae5b11db6378ab.scope: Deactivated successfully.
Jan 23 18:35:16 compute-0 sudo[85237]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 23 18:35:16 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 23 18:35:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:35:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:16 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 23 18:35:16 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 23 18:35:16 compute-0 sudo[85349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:16 compute-0 sudo[85349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:16 compute-0 sudo[85349]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:16 compute-0 sudo[85374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:35:16 compute-0 sudo[85374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:16 compute-0 podman[85437]: 2026-01-23 18:35:16.640399745 +0000 UTC m=+0.037074205 container create 937aaf913b62fa3a29f348660f1ab42235371bbd36f3d7819a0791605edc8409 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 18:35:16 compute-0 systemd[1]: Started libpod-conmon-937aaf913b62fa3a29f348660f1ab42235371bbd36f3d7819a0791605edc8409.scope.
Jan 23 18:35:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:16 compute-0 podman[85437]: 2026-01-23 18:35:16.714760156 +0000 UTC m=+0.111434646 container init 937aaf913b62fa3a29f348660f1ab42235371bbd36f3d7819a0791605edc8409 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:16 compute-0 podman[85437]: 2026-01-23 18:35:16.623445892 +0000 UTC m=+0.020120382 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:16 compute-0 podman[85437]: 2026-01-23 18:35:16.721008461 +0000 UTC m=+0.117682921 container start 937aaf913b62fa3a29f348660f1ab42235371bbd36f3d7819a0791605edc8409 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:16 compute-0 podman[85437]: 2026-01-23 18:35:16.72539874 +0000 UTC m=+0.122073210 container attach 937aaf913b62fa3a29f348660f1ab42235371bbd36f3d7819a0791605edc8409 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:16 compute-0 charming_lumiere[85453]: 167 167
Jan 23 18:35:16 compute-0 systemd[1]: libpod-937aaf913b62fa3a29f348660f1ab42235371bbd36f3d7819a0791605edc8409.scope: Deactivated successfully.
Jan 23 18:35:16 compute-0 podman[85437]: 2026-01-23 18:35:16.727037491 +0000 UTC m=+0.123711951 container died 937aaf913b62fa3a29f348660f1ab42235371bbd36f3d7819a0791605edc8409 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a5bb3f7f19bcf52f7b01afc447e109f6905597228d8936ec8cd0e144cb05af4-merged.mount: Deactivated successfully.
Jan 23 18:35:16 compute-0 podman[85437]: 2026-01-23 18:35:16.80169774 +0000 UTC m=+0.198372190 container remove 937aaf913b62fa3a29f348660f1ab42235371bbd36f3d7819a0791605edc8409 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:16 compute-0 systemd[1]: libpod-conmon-937aaf913b62fa3a29f348660f1ab42235371bbd36f3d7819a0791605edc8409.scope: Deactivated successfully.
Jan 23 18:35:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 23 18:35:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:17 compute-0 podman[85485]: 2026-01-23 18:35:17.030678781 +0000 UTC m=+0.045876573 container create 2edba844f558d25370d995002666cad9d2351254f9b79e84022281f293246ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate-test, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 18:35:17 compute-0 systemd[1]: Started libpod-conmon-2edba844f558d25370d995002666cad9d2351254f9b79e84022281f293246ff3.scope.
Jan 23 18:35:17 compute-0 podman[85485]: 2026-01-23 18:35:17.01294585 +0000 UTC m=+0.028143632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d38da3fdf1c45508f74eb098995439804ef0aa2cbe84711ab478b7b96fad05d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d38da3fdf1c45508f74eb098995439804ef0aa2cbe84711ab478b7b96fad05d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d38da3fdf1c45508f74eb098995439804ef0aa2cbe84711ab478b7b96fad05d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d38da3fdf1c45508f74eb098995439804ef0aa2cbe84711ab478b7b96fad05d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d38da3fdf1c45508f74eb098995439804ef0aa2cbe84711ab478b7b96fad05d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:17 compute-0 podman[85485]: 2026-01-23 18:35:17.130583349 +0000 UTC m=+0.145781131 container init 2edba844f558d25370d995002666cad9d2351254f9b79e84022281f293246ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 18:35:17 compute-0 podman[85485]: 2026-01-23 18:35:17.137634084 +0000 UTC m=+0.152831846 container start 2edba844f558d25370d995002666cad9d2351254f9b79e84022281f293246ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate-test, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:17 compute-0 podman[85485]: 2026-01-23 18:35:17.14149871 +0000 UTC m=+0.156696493 container attach 2edba844f558d25370d995002666cad9d2351254f9b79e84022281f293246ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:17 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate-test[85501]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 23 18:35:17 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate-test[85501]:                             [--no-systemd] [--no-tmpfs]
Jan 23 18:35:17 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate-test[85501]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 23 18:35:17 compute-0 systemd[1]: libpod-2edba844f558d25370d995002666cad9d2351254f9b79e84022281f293246ff3.scope: Deactivated successfully.
Jan 23 18:35:17 compute-0 podman[85485]: 2026-01-23 18:35:17.322064025 +0000 UTC m=+0.337261797 container died 2edba844f558d25370d995002666cad9d2351254f9b79e84022281f293246ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate-test, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 18:35:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d38da3fdf1c45508f74eb098995439804ef0aa2cbe84711ab478b7b96fad05d-merged.mount: Deactivated successfully.
Jan 23 18:35:17 compute-0 podman[85485]: 2026-01-23 18:35:17.379253709 +0000 UTC m=+0.394451471 container remove 2edba844f558d25370d995002666cad9d2351254f9b79e84022281f293246ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate-test, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:17 compute-0 systemd[1]: libpod-conmon-2edba844f558d25370d995002666cad9d2351254f9b79e84022281f293246ff3.scope: Deactivated successfully.
Jan 23 18:35:17 compute-0 systemd[1]: Reloading.
Jan 23 18:35:17 compute-0 systemd-rc-local-generator[85569]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:35:17 compute-0 systemd-sysv-generator[85572]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:35:17 compute-0 systemd[1]: Reloading.
Jan 23 18:35:17 compute-0 ceph-mon[75097]: Deploying daemon osd.0 on compute-0
Jan 23 18:35:17 compute-0 ceph-mon[75097]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:18 compute-0 systemd-rc-local-generator[85603]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:35:18 compute-0 systemd-sysv-generator[85606]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:35:18 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:18 compute-0 systemd[1]: Starting Ceph osd.0 for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:35:18 compute-0 podman[85661]: 2026-01-23 18:35:18.469342299 +0000 UTC m=+0.062580919 container create 30c9060e9f5147de93fdbc95a1746e94ba83448927b75bd88d1fe75acfe44c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:18 compute-0 podman[85661]: 2026-01-23 18:35:18.444506961 +0000 UTC m=+0.037745671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300e07cee7b80a50a995a24a7df3dffc03e22e4b076233d7fa09ddaffac6b7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300e07cee7b80a50a995a24a7df3dffc03e22e4b076233d7fa09ddaffac6b7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300e07cee7b80a50a995a24a7df3dffc03e22e4b076233d7fa09ddaffac6b7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300e07cee7b80a50a995a24a7df3dffc03e22e4b076233d7fa09ddaffac6b7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300e07cee7b80a50a995a24a7df3dffc03e22e4b076233d7fa09ddaffac6b7f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:18 compute-0 podman[85661]: 2026-01-23 18:35:18.561074433 +0000 UTC m=+0.154313093 container init 30c9060e9f5147de93fdbc95a1746e94ba83448927b75bd88d1fe75acfe44c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 23 18:35:18 compute-0 podman[85661]: 2026-01-23 18:35:18.579000909 +0000 UTC m=+0.172239569 container start 30c9060e9f5147de93fdbc95a1746e94ba83448927b75bd88d1fe75acfe44c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:18 compute-0 podman[85661]: 2026-01-23 18:35:18.583697737 +0000 UTC m=+0.176936357 container attach 30c9060e9f5147de93fdbc95a1746e94ba83448927b75bd88d1fe75acfe44c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:35:18 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:18 compute-0 bash[85661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:18 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:18 compute-0 bash[85661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:19 compute-0 lvm[85762]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:35:19 compute-0 lvm[85762]: VG ceph_vg1 finished
Jan 23 18:35:19 compute-0 lvm[85761]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:35:19 compute-0 lvm[85761]: VG ceph_vg0 finished
Jan 23 18:35:19 compute-0 lvm[85764]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:35:19 compute-0 lvm[85764]: VG ceph_vg2 finished
Jan 23 18:35:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 23 18:35:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:19 compute-0 bash[85661]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 23 18:35:19 compute-0 bash[85661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:19 compute-0 bash[85661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 18:35:19 compute-0 bash[85661]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 18:35:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 23 18:35:19 compute-0 bash[85661]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 23 18:35:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:19 compute-0 bash[85661]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:19 compute-0 bash[85661]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 18:35:19 compute-0 bash[85661]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 18:35:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 18:35:19 compute-0 bash[85661]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 18:35:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate[85676]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 23 18:35:19 compute-0 bash[85661]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 23 18:35:19 compute-0 systemd[1]: libpod-30c9060e9f5147de93fdbc95a1746e94ba83448927b75bd88d1fe75acfe44c6e.scope: Deactivated successfully.
Jan 23 18:35:19 compute-0 podman[85661]: 2026-01-23 18:35:19.754052584 +0000 UTC m=+1.347291214 container died 30c9060e9f5147de93fdbc95a1746e94ba83448927b75bd88d1fe75acfe44c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:19 compute-0 systemd[1]: libpod-30c9060e9f5147de93fdbc95a1746e94ba83448927b75bd88d1fe75acfe44c6e.scope: Consumed 1.644s CPU time.
Jan 23 18:35:19 compute-0 ceph-mon[75097]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2300e07cee7b80a50a995a24a7df3dffc03e22e4b076233d7fa09ddaffac6b7f-merged.mount: Deactivated successfully.
Jan 23 18:35:20 compute-0 podman[85661]: 2026-01-23 18:35:20.043977522 +0000 UTC m=+1.637216172 container remove 30c9060e9f5147de93fdbc95a1746e94ba83448927b75bd88d1fe75acfe44c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 18:35:20 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:20 compute-0 podman[85933]: 2026-01-23 18:35:20.386369127 +0000 UTC m=+0.070713541 container create 41ddd2197559a43c1f66b5e4814e04e867506016f16ead227adcf4a4d4206027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 18:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc644d27ad0ee6e9fc9a7b06957b2e7e8be71bcc43e0d38404e6660873cabf2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc644d27ad0ee6e9fc9a7b06957b2e7e8be71bcc43e0d38404e6660873cabf2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc644d27ad0ee6e9fc9a7b06957b2e7e8be71bcc43e0d38404e6660873cabf2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc644d27ad0ee6e9fc9a7b06957b2e7e8be71bcc43e0d38404e6660873cabf2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc644d27ad0ee6e9fc9a7b06957b2e7e8be71bcc43e0d38404e6660873cabf2f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:20 compute-0 podman[85933]: 2026-01-23 18:35:20.356922174 +0000 UTC m=+0.041266658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:20 compute-0 podman[85933]: 2026-01-23 18:35:20.462508502 +0000 UTC m=+0.146853006 container init 41ddd2197559a43c1f66b5e4814e04e867506016f16ead227adcf4a4d4206027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 18:35:20 compute-0 podman[85933]: 2026-01-23 18:35:20.473218199 +0000 UTC m=+0.157562643 container start 41ddd2197559a43c1f66b5e4814e04e867506016f16ead227adcf4a4d4206027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:20 compute-0 bash[85933]: 41ddd2197559a43c1f66b5e4814e04e867506016f16ead227adcf4a4d4206027
Jan 23 18:35:20 compute-0 systemd[1]: Started Ceph osd.0 for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:35:20 compute-0 ceph-osd[85953]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 18:35:20 compute-0 ceph-osd[85953]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: pidfile_write: ignore empty --pid-file
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 sudo[85374]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:35:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 23 18:35:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 23 18:35:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:35:20 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:20 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 23 18:35:20 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 sudo[85967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:20 compute-0 sudo[85967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:20 compute-0 sudo[85967]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952400 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 sudo[85996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:35:20 compute-0 sudo[85996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771952000 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 ceph-osd[85953]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 23 18:35:20 compute-0 ceph-osd[85953]: load: jerasure load: lrc 
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 ceph-osd[85953]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 23 18:35:20 compute-0 ceph-osd[85953]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x558771953c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x5587725e9800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x5587725e9800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x5587725e9800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bdev(0x5587725e9800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluefs mount
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluefs mount shared_bdev_used = 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: RocksDB version: 7.9.2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Git sha 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: DB SUMMARY
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: DB Session ID:  31Q59DYRY58T1AM0CILG
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: CURRENT file:  CURRENT
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                         Options.error_if_exists: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.create_if_missing: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                                     Options.env: 0x5587717e3f80
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                                Options.info_log: 0x5587728348a0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                              Options.statistics: (nil)
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.use_fsync: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                              Options.db_log_dir: 
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                                 Options.wal_dir: db.wal
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.write_buffer_manager: 0x558771848b40
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.unordered_write: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.row_cache: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                              Options.wal_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.two_write_queues: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.wal_compression: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.atomic_flush: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.max_background_jobs: 4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.max_background_compactions: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.max_subcompactions: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.max_open_files: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Compression algorithms supported:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         kZSTD supported: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         kXpressCompression supported: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         kBZip2Compression supported: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         kLZ4Compression supported: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         kZlibCompression supported: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         kSnappyCompression supported: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772834c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772834c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772834c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772834c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772834c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772834c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772834c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772834c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772834c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772834c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 964ad495-499a-4df4-a853-88099d1493af
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193320995127, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 23 18:35:20 compute-0 ceph-osd[85953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193320997695, "job": 1, "event": "recovery_finished"}
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 23 18:35:20 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: freelist init
Jan 23 18:35:21 compute-0 ceph-osd[85953]: freelist _read_cfg
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluefs umount
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bdev(0x5587725e9800 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bdev(0x5587725e9800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bdev(0x5587725e9800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bdev(0x5587725e9800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bdev(0x5587725e9800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluefs mount
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluefs mount shared_bdev_used = 27262976
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: RocksDB version: 7.9.2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Git sha 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: DB SUMMARY
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: DB Session ID:  31Q59DYRY58T1AM0CILH
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: CURRENT file:  CURRENT
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                         Options.error_if_exists: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.create_if_missing: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                                     Options.env: 0x5587717e39d0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                                Options.info_log: 0x558772835aa0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                              Options.statistics: (nil)
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.use_fsync: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                              Options.db_log_dir: 
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                                 Options.wal_dir: db.wal
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.write_buffer_manager: 0x558771849900
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.unordered_write: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.row_cache: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                              Options.wal_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.two_write_queues: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.wal_compression: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.atomic_flush: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.max_background_jobs: 4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.max_background_compactions: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.max_subcompactions: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.max_open_files: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Compression algorithms supported:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         kZSTD supported: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         kXpressCompression supported: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         kBZip2Compression supported: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         kLZ4Compression supported: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         kZlibCompression supported: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         kSnappyCompression supported: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772835ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772835ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772835ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772835ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772835ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772835ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772835ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772835ec0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772835ec0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558772835ec0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5587717e74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 964ad495-499a-4df4-a853-88099d1493af
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193321049859, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193321054586, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193321, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "964ad495-499a-4df4-a853-88099d1493af", "db_session_id": "31Q59DYRY58T1AM0CILH", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193321058477, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193321, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "964ad495-499a-4df4-a853-88099d1493af", "db_session_id": "31Q59DYRY58T1AM0CILH", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193321061797, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193321, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "964ad495-499a-4df4-a853-88099d1493af", "db_session_id": "31Q59DYRY58T1AM0CILH", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193321064004, "job": 1, "event": "recovery_finished"}
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 23 18:35:21 compute-0 podman[86286]: 2026-01-23 18:35:21.082838577 +0000 UTC m=+0.036326395 container create fd124b0eb647441a0e1b13b1d382e337c15095ba09995eabfb596a76af28ce59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558772a19c00
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: DB pointer 0x5587729ee000
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 23 18:35:21 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:35:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 18:35:21 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 23 18:35:21 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 23 18:35:21 compute-0 ceph-osd[85953]: _get_class not permitted to load lua
Jan 23 18:35:21 compute-0 ceph-osd[85953]: _get_class not permitted to load sdk
Jan 23 18:35:21 compute-0 ceph-osd[85953]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 23 18:35:21 compute-0 ceph-osd[85953]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 23 18:35:21 compute-0 ceph-osd[85953]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 23 18:35:21 compute-0 ceph-osd[85953]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 23 18:35:21 compute-0 ceph-osd[85953]: osd.0 0 load_pgs
Jan 23 18:35:21 compute-0 ceph-osd[85953]: osd.0 0 load_pgs opened 0 pgs
Jan 23 18:35:21 compute-0 ceph-osd[85953]: osd.0 0 log_to_monitors true
Jan 23 18:35:21 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0[85949]: 2026-01-23T18:35:21.106+0000 7fcf190428c0 -1 osd.0 0 log_to_monitors true
Jan 23 18:35:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 23 18:35:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2499281510,v1:192.168.122.100:6803/2499281510]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 23 18:35:21 compute-0 systemd[1]: Started libpod-conmon-fd124b0eb647441a0e1b13b1d382e337c15095ba09995eabfb596a76af28ce59.scope.
Jan 23 18:35:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:21 compute-0 podman[86286]: 2026-01-23 18:35:21.159091875 +0000 UTC m=+0.112579693 container init fd124b0eb647441a0e1b13b1d382e337c15095ba09995eabfb596a76af28ce59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:21 compute-0 podman[86286]: 2026-01-23 18:35:21.066311866 +0000 UTC m=+0.019799694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:21 compute-0 podman[86286]: 2026-01-23 18:35:21.167037854 +0000 UTC m=+0.120525692 container start fd124b0eb647441a0e1b13b1d382e337c15095ba09995eabfb596a76af28ce59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 18:35:21 compute-0 podman[86286]: 2026-01-23 18:35:21.170766276 +0000 UTC m=+0.124254094 container attach fd124b0eb647441a0e1b13b1d382e337c15095ba09995eabfb596a76af28ce59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcnulty, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:21 compute-0 heuristic_mcnulty[86509]: 167 167
Jan 23 18:35:21 compute-0 systemd[1]: libpod-fd124b0eb647441a0e1b13b1d382e337c15095ba09995eabfb596a76af28ce59.scope: Deactivated successfully.
Jan 23 18:35:21 compute-0 podman[86286]: 2026-01-23 18:35:21.173272658 +0000 UTC m=+0.126760486 container died fd124b0eb647441a0e1b13b1d382e337c15095ba09995eabfb596a76af28ce59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e156052b339d269d48a05969071314340b464eeed79687a5feda56d9107fdddd-merged.mount: Deactivated successfully.
Jan 23 18:35:21 compute-0 podman[86286]: 2026-01-23 18:35:21.217780437 +0000 UTC m=+0.171268255 container remove fd124b0eb647441a0e1b13b1d382e337c15095ba09995eabfb596a76af28ce59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcnulty, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:21 compute-0 systemd[1]: libpod-conmon-fd124b0eb647441a0e1b13b1d382e337c15095ba09995eabfb596a76af28ce59.scope: Deactivated successfully.
Jan 23 18:35:21 compute-0 podman[86538]: 2026-01-23 18:35:21.533896067 +0000 UTC m=+0.067557304 container create 8348cd5d558f0b1229fa0d0bbb880df3a297439df5c27053c5395e152d48e435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 23 18:35:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 23 18:35:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:21 compute-0 ceph-mon[75097]: Deploying daemon osd.1 on compute-0
Jan 23 18:35:21 compute-0 ceph-mon[75097]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:21 compute-0 ceph-mon[75097]: from='osd.0 [v2:192.168.122.100:6802/2499281510,v1:192.168.122.100:6803/2499281510]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 23 18:35:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 23 18:35:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:35:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2499281510,v1:192.168.122.100:6803/2499281510]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 23 18:35:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 23 18:35:21 compute-0 systemd[1]: Started libpod-conmon-8348cd5d558f0b1229fa0d0bbb880df3a297439df5c27053c5395e152d48e435.scope.
Jan 23 18:35:21 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 23 18:35:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 23 18:35:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2499281510,v1:192.168.122.100:6803/2499281510]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 23 18:35:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 23 18:35:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:21 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:21 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:21 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:21 compute-0 podman[86538]: 2026-01-23 18:35:21.511556661 +0000 UTC m=+0.045217908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c0ba9e45fb02e0c406f6e4b07724019a6cc279bbcdc96e85e5c26bf1de1ff7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c0ba9e45fb02e0c406f6e4b07724019a6cc279bbcdc96e85e5c26bf1de1ff7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c0ba9e45fb02e0c406f6e4b07724019a6cc279bbcdc96e85e5c26bf1de1ff7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c0ba9e45fb02e0c406f6e4b07724019a6cc279bbcdc96e85e5c26bf1de1ff7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c0ba9e45fb02e0c406f6e4b07724019a6cc279bbcdc96e85e5c26bf1de1ff7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:21 compute-0 podman[86538]: 2026-01-23 18:35:21.635778073 +0000 UTC m=+0.169439310 container init 8348cd5d558f0b1229fa0d0bbb880df3a297439df5c27053c5395e152d48e435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:35:21 compute-0 podman[86538]: 2026-01-23 18:35:21.646662034 +0000 UTC m=+0.180323231 container start 8348cd5d558f0b1229fa0d0bbb880df3a297439df5c27053c5395e152d48e435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 23 18:35:21 compute-0 podman[86538]: 2026-01-23 18:35:21.650795687 +0000 UTC m=+0.184456934 container attach 8348cd5d558f0b1229fa0d0bbb880df3a297439df5c27053c5395e152d48e435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:35:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:21 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate-test[86555]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 23 18:35:21 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate-test[86555]:                             [--no-systemd] [--no-tmpfs]
Jan 23 18:35:21 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate-test[86555]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 23 18:35:21 compute-0 systemd[1]: libpod-8348cd5d558f0b1229fa0d0bbb880df3a297439df5c27053c5395e152d48e435.scope: Deactivated successfully.
Jan 23 18:35:21 compute-0 podman[86538]: 2026-01-23 18:35:21.8448966 +0000 UTC m=+0.378557807 container died 8348cd5d558f0b1229fa0d0bbb880df3a297439df5c27053c5395e152d48e435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate-test, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0c0ba9e45fb02e0c406f6e4b07724019a6cc279bbcdc96e85e5c26bf1de1ff7-merged.mount: Deactivated successfully.
Jan 23 18:35:21 compute-0 podman[86538]: 2026-01-23 18:35:21.895719835 +0000 UTC m=+0.429381072 container remove 8348cd5d558f0b1229fa0d0bbb880df3a297439df5c27053c5395e152d48e435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:35:21 compute-0 systemd[1]: libpod-conmon-8348cd5d558f0b1229fa0d0bbb880df3a297439df5c27053c5395e152d48e435.scope: Deactivated successfully.
Jan 23 18:35:22 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:22 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 23 18:35:22 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 23 18:35:22 compute-0 systemd[1]: Reloading.
Jan 23 18:35:22 compute-0 systemd-sysv-generator[86623]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:35:22 compute-0 systemd-rc-local-generator[86620]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:35:22 compute-0 systemd[1]: Reloading.
Jan 23 18:35:22 compute-0 systemd-sysv-generator[86661]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:35:22 compute-0 systemd-rc-local-generator[86657]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:35:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 23 18:35:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:35:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2499281510,v1:192.168.122.100:6803/2499281510]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 23 18:35:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 23 18:35:22 compute-0 ceph-osd[85953]: osd.0 0 done with init, starting boot process
Jan 23 18:35:22 compute-0 ceph-osd[85953]: osd.0 0 start_boot
Jan 23 18:35:22 compute-0 ceph-osd[85953]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 23 18:35:22 compute-0 ceph-osd[85953]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 23 18:35:22 compute-0 ceph-osd[85953]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 23 18:35:22 compute-0 ceph-osd[85953]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 23 18:35:22 compute-0 ceph-osd[85953]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 23 18:35:22 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 23 18:35:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:22 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:22 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:22 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:22 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:22 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:22 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:22 compute-0 ceph-mon[75097]: from='osd.0 [v2:192.168.122.100:6802/2499281510,v1:192.168.122.100:6803/2499281510]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 23 18:35:22 compute-0 ceph-mon[75097]: osdmap e7: 3 total, 0 up, 3 in
Jan 23 18:35:22 compute-0 ceph-mon[75097]: from='osd.0 [v2:192.168.122.100:6802/2499281510,v1:192.168.122.100:6803/2499281510]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 23 18:35:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:22 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2499281510; not ready for session (expect reconnect)
Jan 23 18:35:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:22 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:22 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:22 compute-0 systemd[1]: Starting Ceph osd.1 for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:35:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:22 compute-0 podman[86717]: 2026-01-23 18:35:22.985010158 +0000 UTC m=+0.064221786 container create 790b4b3bbaa48bbcb866044c8e711ccf3f6c580958fc9e8ee3b2345eb8b8a1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:23 compute-0 podman[86717]: 2026-01-23 18:35:22.944346715 +0000 UTC m=+0.023558353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22e101a9af434d0d14d7113abf0e12a31b3d950f67307d0be570ed3ee27e2ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22e101a9af434d0d14d7113abf0e12a31b3d950f67307d0be570ed3ee27e2ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22e101a9af434d0d14d7113abf0e12a31b3d950f67307d0be570ed3ee27e2ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22e101a9af434d0d14d7113abf0e12a31b3d950f67307d0be570ed3ee27e2ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22e101a9af434d0d14d7113abf0e12a31b3d950f67307d0be570ed3ee27e2ca/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:23 compute-0 podman[86717]: 2026-01-23 18:35:23.113729644 +0000 UTC m=+0.192941312 container init 790b4b3bbaa48bbcb866044c8e711ccf3f6c580958fc9e8ee3b2345eb8b8a1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 18:35:23 compute-0 podman[86717]: 2026-01-23 18:35:23.119421504 +0000 UTC m=+0.198633132 container start 790b4b3bbaa48bbcb866044c8e711ccf3f6c580958fc9e8ee3b2345eb8b8a1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:23 compute-0 podman[86717]: 2026-01-23 18:35:23.151444809 +0000 UTC m=+0.230656437 container attach 790b4b3bbaa48bbcb866044c8e711ccf3f6c580958fc9e8ee3b2345eb8b8a1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 23 18:35:23 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:23 compute-0 bash[86717]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:23 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:23 compute-0 bash[86717]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:23 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2499281510; not ready for session (expect reconnect)
Jan 23 18:35:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:23 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:23 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:23 compute-0 ceph-mon[75097]: from='osd.0 [v2:192.168.122.100:6802/2499281510,v1:192.168.122.100:6803/2499281510]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 23 18:35:23 compute-0 ceph-mon[75097]: osdmap e8: 3 total, 0 up, 3 in
Jan 23 18:35:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:23 compute-0 ceph-mon[75097]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:23 compute-0 lvm[86819]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:35:23 compute-0 lvm[86819]: VG ceph_vg1 finished
Jan 23 18:35:23 compute-0 lvm[86818]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:35:23 compute-0 lvm[86818]: VG ceph_vg0 finished
Jan 23 18:35:23 compute-0 lvm[86821]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:35:23 compute-0 lvm[86821]: VG ceph_vg2 finished
Jan 23 18:35:24 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:24 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2499281510; not ready for session (expect reconnect)
Jan 23 18:35:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:24 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:24 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:24 compute-0 ceph-mon[75097]: purged_snaps scrub starts
Jan 23 18:35:24 compute-0 ceph-mon[75097]: purged_snaps scrub ok
Jan 23 18:35:24 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:24 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 23 18:35:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:24 compute-0 bash[86717]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 23 18:35:24 compute-0 bash[86717]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:24 compute-0 bash[86717]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 18:35:24 compute-0 bash[86717]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 18:35:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 23 18:35:24 compute-0 bash[86717]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 23 18:35:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:24 compute-0 bash[86717]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 bash[86717]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 23 18:35:25 compute-0 bash[86717]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 23 18:35:25 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 18:35:25 compute-0 bash[86717]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 18:35:25 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate[86732]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 23 18:35:25 compute-0 bash[86717]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 23 18:35:25 compute-0 systemd[1]: libpod-790b4b3bbaa48bbcb866044c8e711ccf3f6c580958fc9e8ee3b2345eb8b8a1d2.scope: Deactivated successfully.
Jan 23 18:35:25 compute-0 systemd[1]: libpod-790b4b3bbaa48bbcb866044c8e711ccf3f6c580958fc9e8ee3b2345eb8b8a1d2.scope: Consumed 1.505s CPU time.
Jan 23 18:35:25 compute-0 podman[86717]: 2026-01-23 18:35:25.055049146 +0000 UTC m=+2.134260774 container died 790b4b3bbaa48bbcb866044c8e711ccf3f6c580958fc9e8ee3b2345eb8b8a1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 18:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c22e101a9af434d0d14d7113abf0e12a31b3d950f67307d0be570ed3ee27e2ca-merged.mount: Deactivated successfully.
Jan 23 18:35:25 compute-0 podman[86717]: 2026-01-23 18:35:25.254257663 +0000 UTC m=+2.333469281 container remove 790b4b3bbaa48bbcb866044c8e711ccf3f6c580958fc9e8ee3b2345eb8b8a1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1-activate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:25 compute-0 podman[86989]: 2026-01-23 18:35:25.464033538 +0000 UTC m=+0.049781055 container create aff29668a1c70136c91e84c2cf081d06899a975bf8699df854069436b25b3f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 18:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7091120bb0e152f18dacd16790342239c7bd7bc7eb1b27eb5ecac0014406849/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7091120bb0e152f18dacd16790342239c7bd7bc7eb1b27eb5ecac0014406849/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7091120bb0e152f18dacd16790342239c7bd7bc7eb1b27eb5ecac0014406849/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:25 compute-0 podman[86989]: 2026-01-23 18:35:25.447046359 +0000 UTC m=+0.032793946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7091120bb0e152f18dacd16790342239c7bd7bc7eb1b27eb5ecac0014406849/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7091120bb0e152f18dacd16790342239c7bd7bc7eb1b27eb5ecac0014406849/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:25 compute-0 podman[86989]: 2026-01-23 18:35:25.57668984 +0000 UTC m=+0.162437347 container init aff29668a1c70136c91e84c2cf081d06899a975bf8699df854069436b25b3f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:25 compute-0 podman[86989]: 2026-01-23 18:35:25.587115075 +0000 UTC m=+0.172862582 container start aff29668a1c70136c91e84c2cf081d06899a975bf8699df854069436b25b3f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:25 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2499281510; not ready for session (expect reconnect)
Jan 23 18:35:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:25 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:25 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:25 compute-0 bash[86989]: aff29668a1c70136c91e84c2cf081d06899a975bf8699df854069436b25b3f47
Jan 23 18:35:25 compute-0 systemd[1]: Started Ceph osd.1 for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:35:25 compute-0 ceph-osd[87009]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 18:35:25 compute-0 ceph-osd[87009]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 23 18:35:25 compute-0 ceph-osd[87009]: pidfile_write: ignore empty --pid-file
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 sudo[85996]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92400 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 23 18:35:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 23 18:35:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:35:25 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:25 compute-0 ceph-mon[75097]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:25 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 23 18:35:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:25 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c92000 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 ceph-osd[87009]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 23 18:35:25 compute-0 ceph-osd[87009]: load: jerasure load: lrc 
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 sudo[87033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 sudo[87033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:25 compute-0 sudo[87033]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:25 compute-0 ceph-osd[87009]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 23 18:35:25 compute-0 ceph-osd[87009]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 sudo[87068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:35:25 compute-0 sudo[87068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559448c93c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559449929800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559449929800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559449929800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bdev(0x559449929800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluefs mount
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluefs mount shared_bdev_used = 0
Jan 23 18:35:25 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: RocksDB version: 7.9.2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Git sha 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: DB SUMMARY
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: DB Session ID:  IS8GIDQKYAQYZIA6MWCB
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: CURRENT file:  CURRENT
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                         Options.error_if_exists: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.create_if_missing: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                                     Options.env: 0x559448b23ea0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                                Options.info_log: 0x559449b748a0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                              Options.statistics: (nil)
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.use_fsync: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                              Options.db_log_dir: 
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                                 Options.wal_dir: db.wal
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.write_buffer_manager: 0x559448b88b40
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.unordered_write: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.row_cache: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                              Options.wal_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.two_write_queues: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.wal_compression: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.atomic_flush: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.max_background_jobs: 4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.max_background_compactions: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.max_subcompactions: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.max_open_files: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Compression algorithms supported:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kZSTD supported: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kXpressCompression supported: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kBZip2Compression supported: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kLZ4Compression supported: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kZlibCompression supported: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kSnappyCompression supported: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b27a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b27a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b27a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 226e5e49-e32f-4d5b-a0b9-1d03fb7b77bc
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193326034034, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193326036812, "job": 1, "event": "recovery_finished"}
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: freelist init
Jan 23 18:35:26 compute-0 ceph-osd[87009]: freelist _read_cfg
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluefs umount
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bdev(0x559449929800 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bdev(0x559449929800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bdev(0x559449929800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bdev(0x559449929800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bdev(0x559449929800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluefs mount
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluefs mount shared_bdev_used = 27262976
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: RocksDB version: 7.9.2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Git sha 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: DB SUMMARY
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: DB Session ID:  IS8GIDQKYAQYZIA6MWCA
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: CURRENT file:  CURRENT
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                         Options.error_if_exists: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.create_if_missing: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                                     Options.env: 0x559449d44a80
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                                Options.info_log: 0x559449b74960
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                              Options.statistics: (nil)
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.use_fsync: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                              Options.db_log_dir: 
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                                 Options.wal_dir: db.wal
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.write_buffer_manager: 0x559448b89900
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.unordered_write: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.row_cache: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                              Options.wal_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.two_write_queues: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.wal_compression: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.atomic_flush: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.max_background_jobs: 4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.max_background_compactions: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.max_subcompactions: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.max_open_files: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Compression algorithms supported:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kZSTD supported: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kXpressCompression supported: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kBZip2Compression supported: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kLZ4Compression supported: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kZlibCompression supported: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         kSnappyCompression supported: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b74bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b278d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b750c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b27a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b750c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b27a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559449b750c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559448b27a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 226e5e49-e32f-4d5b-a0b9-1d03fb7b77bc
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193326089659, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193326093535, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193326, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "226e5e49-e32f-4d5b-a0b9-1d03fb7b77bc", "db_session_id": "IS8GIDQKYAQYZIA6MWCA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193326122768, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193326, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "226e5e49-e32f-4d5b-a0b9-1d03fb7b77bc", "db_session_id": "IS8GIDQKYAQYZIA6MWCA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193326129461, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193326, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "226e5e49-e32f-4d5b-a0b9-1d03fb7b77bc", "db_session_id": "IS8GIDQKYAQYZIA6MWCA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193326165621, "job": 1, "event": "recovery_finished"}
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559449d8e000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: DB pointer 0x559449d2e000
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 23 18:35:26 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:35:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 18:35:26 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 23 18:35:26 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 23 18:35:26 compute-0 ceph-osd[87009]: _get_class not permitted to load lua
Jan 23 18:35:26 compute-0 ceph-osd[87009]: _get_class not permitted to load sdk
Jan 23 18:35:26 compute-0 ceph-osd[87009]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 23 18:35:26 compute-0 ceph-osd[87009]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 23 18:35:26 compute-0 ceph-osd[87009]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 23 18:35:26 compute-0 ceph-osd[87009]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 23 18:35:26 compute-0 ceph-osd[87009]: osd.1 0 load_pgs
Jan 23 18:35:26 compute-0 ceph-osd[87009]: osd.1 0 load_pgs opened 0 pgs
Jan 23 18:35:26 compute-0 ceph-osd[87009]: osd.1 0 log_to_monitors true
Jan 23 18:35:26 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1[87005]: 2026-01-23T18:35:26.269+0000 7f17a96458c0 -1 osd.1 0 log_to_monitors true
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 23 18:35:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2862543050,v1:192.168.122.100:6807/2862543050]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 23 18:35:26 compute-0 podman[87549]: 2026-01-23 18:35:26.388194172 +0000 UTC m=+0.071306152 container create f61cff0e6bd75731297cbd3b9450a1e62a139529c3cad800b94cd95526b11125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_goldberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 18:35:26 compute-0 podman[87549]: 2026-01-23 18:35:26.344820008 +0000 UTC m=+0.027931998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:26 compute-0 systemd[1]: Started libpod-conmon-f61cff0e6bd75731297cbd3b9450a1e62a139529c3cad800b94cd95526b11125.scope.
Jan 23 18:35:26 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:26 compute-0 podman[87549]: 2026-01-23 18:35:26.511911046 +0000 UTC m=+0.195023116 container init f61cff0e6bd75731297cbd3b9450a1e62a139529c3cad800b94cd95526b11125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:35:26 compute-0 podman[87549]: 2026-01-23 18:35:26.519389713 +0000 UTC m=+0.202501693 container start f61cff0e6bd75731297cbd3b9450a1e62a139529c3cad800b94cd95526b11125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_goldberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 23 18:35:26 compute-0 epic_goldberg[87565]: 167 167
Jan 23 18:35:26 compute-0 systemd[1]: libpod-f61cff0e6bd75731297cbd3b9450a1e62a139529c3cad800b94cd95526b11125.scope: Deactivated successfully.
Jan 23 18:35:26 compute-0 podman[87549]: 2026-01-23 18:35:26.542471823 +0000 UTC m=+0.225583803 container attach f61cff0e6bd75731297cbd3b9450a1e62a139529c3cad800b94cd95526b11125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_goldberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:26 compute-0 podman[87549]: 2026-01-23 18:35:26.542879313 +0000 UTC m=+0.225991303 container died f61cff0e6bd75731297cbd3b9450a1e62a139529c3cad800b94cd95526b11125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_goldberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 18:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-33c5e41ddb796e474e729177067f78db20bf78b14d0458c5322be350378972e7-merged.mount: Deactivated successfully.
Jan 23 18:35:26 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2499281510; not ready for session (expect reconnect)
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:26 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:26 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:26 compute-0 podman[87549]: 2026-01-23 18:35:26.671883807 +0000 UTC m=+0.354995817 container remove f61cff0e6bd75731297cbd3b9450a1e62a139529c3cad800b94cd95526b11125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:35:26 compute-0 systemd[1]: libpod-conmon-f61cff0e6bd75731297cbd3b9450a1e62a139529c3cad800b94cd95526b11125.scope: Deactivated successfully.
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:35:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2862543050,v1:192.168.122.100:6807/2862543050]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 23 18:35:26 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 23 18:35:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 23 18:35:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2862543050,v1:192.168.122.100:6807/2862543050]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:26 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:26 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:26 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:26 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:26 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:26 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 23 18:35:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:26 compute-0 ceph-mon[75097]: Deploying daemon osd.2 on compute-0
Jan 23 18:35:26 compute-0 ceph-mon[75097]: from='osd.1 [v2:192.168.122.100:6806/2862543050,v1:192.168.122.100:6807/2862543050]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 23 18:35:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:26 compute-0 podman[87595]: 2026-01-23 18:35:26.984942758 +0000 UTC m=+0.044230158 container create e16c1a748ffd96f9567849086a99b2a9c00ef6cf84de1ea0c64527716f176470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:35:27 compute-0 systemd[1]: Started libpod-conmon-e16c1a748ffd96f9567849086a99b2a9c00ef6cf84de1ea0c64527716f176470.scope.
Jan 23 18:35:27 compute-0 podman[87595]: 2026-01-23 18:35:26.960103422 +0000 UTC m=+0.019390842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:27 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290113fa93b7303d365a93d7532a8fca83c03ee2520ac300070aee4d496faa67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290113fa93b7303d365a93d7532a8fca83c03ee2520ac300070aee4d496faa67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290113fa93b7303d365a93d7532a8fca83c03ee2520ac300070aee4d496faa67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290113fa93b7303d365a93d7532a8fca83c03ee2520ac300070aee4d496faa67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290113fa93b7303d365a93d7532a8fca83c03ee2520ac300070aee4d496faa67/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:27 compute-0 podman[87595]: 2026-01-23 18:35:27.105676203 +0000 UTC m=+0.164963663 container init e16c1a748ffd96f9567849086a99b2a9c00ef6cf84de1ea0c64527716f176470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 18:35:27 compute-0 podman[87595]: 2026-01-23 18:35:27.11729774 +0000 UTC m=+0.176585140 container start e16c1a748ffd96f9567849086a99b2a9c00ef6cf84de1ea0c64527716f176470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:27 compute-0 podman[87595]: 2026-01-23 18:35:27.121672235 +0000 UTC m=+0.180959655 container attach e16c1a748ffd96f9567849086a99b2a9c00ef6cf84de1ea0c64527716f176470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 18:35:27 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 23 18:35:27 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 23 18:35:27 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate-test[87611]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 23 18:35:27 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate-test[87611]:                             [--no-systemd] [--no-tmpfs]
Jan 23 18:35:27 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate-test[87611]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 23 18:35:27 compute-0 systemd[1]: libpod-e16c1a748ffd96f9567849086a99b2a9c00ef6cf84de1ea0c64527716f176470.scope: Deactivated successfully.
Jan 23 18:35:27 compute-0 podman[87595]: 2026-01-23 18:35:27.311781011 +0000 UTC m=+0.371068421 container died e16c1a748ffd96f9567849086a99b2a9c00ef6cf84de1ea0c64527716f176470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate-test, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 18:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-290113fa93b7303d365a93d7532a8fca83c03ee2520ac300070aee4d496faa67-merged.mount: Deactivated successfully.
Jan 23 18:35:27 compute-0 podman[87595]: 2026-01-23 18:35:27.425466931 +0000 UTC m=+0.484754321 container remove e16c1a748ffd96f9567849086a99b2a9c00ef6cf84de1ea0c64527716f176470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:27 compute-0 systemd[1]: libpod-conmon-e16c1a748ffd96f9567849086a99b2a9c00ef6cf84de1ea0c64527716f176470.scope: Deactivated successfully.
Jan 23 18:35:27 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2499281510; not ready for session (expect reconnect)
Jan 23 18:35:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:27 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:27 compute-0 systemd[1]: Reloading.
Jan 23 18:35:27 compute-0 systemd-rc-local-generator[87675]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:35:27 compute-0 systemd-sysv-generator[87680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:35:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 23 18:35:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:35:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2862543050,v1:192.168.122.100:6807/2862543050]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 23 18:35:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Jan 23 18:35:27 compute-0 ceph-osd[87009]: osd.1 0 done with init, starting boot process
Jan 23 18:35:27 compute-0 ceph-osd[87009]: osd.1 0 start_boot
Jan 23 18:35:27 compute-0 ceph-osd[87009]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 23 18:35:27 compute-0 ceph-osd[87009]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 23 18:35:27 compute-0 ceph-osd[87009]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 23 18:35:27 compute-0 ceph-osd[87009]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 23 18:35:27 compute-0 ceph-osd[87009]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 23 18:35:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Jan 23 18:35:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:27 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:27 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:27 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:27 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2862543050; not ready for session (expect reconnect)
Jan 23 18:35:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:27 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:27 compute-0 ceph-mon[75097]: from='osd.1 [v2:192.168.122.100:6806/2862543050,v1:192.168.122.100:6807/2862543050]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 23 18:35:27 compute-0 ceph-mon[75097]: osdmap e9: 3 total, 0 up, 3 in
Jan 23 18:35:27 compute-0 ceph-mon[75097]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:27 compute-0 ceph-mon[75097]: from='osd.1 [v2:192.168.122.100:6806/2862543050,v1:192.168.122.100:6807/2862543050]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 23 18:35:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:27 compute-0 ceph-mon[75097]: from='osd.1 [v2:192.168.122.100:6806/2862543050,v1:192.168.122.100:6807/2862543050]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 23 18:35:27 compute-0 ceph-mon[75097]: osdmap e10: 3 total, 0 up, 3 in
Jan 23 18:35:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:28 compute-0 ceph-mgr[75390]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 23 18:35:28 compute-0 systemd[1]: Reloading.
Jan 23 18:35:28 compute-0 systemd-sysv-generator[87717]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:35:28 compute-0 systemd-rc-local-generator[87714]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:35:28 compute-0 ceph-osd[85953]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.211 iops: 4662.125 elapsed_sec: 0.643
Jan 23 18:35:28 compute-0 ceph-osd[85953]: log_channel(cluster) log [WRN] : OSD bench result of 4662.125450 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 18:35:28 compute-0 ceph-osd[85953]: osd.0 0 waiting for initial osdmap
Jan 23 18:35:28 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0[85949]: 2026-01-23T18:35:28.204+0000 7fcf14fc4640 -1 osd.0 0 waiting for initial osdmap
Jan 23 18:35:28 compute-0 ceph-osd[85953]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 23 18:35:28 compute-0 ceph-osd[85953]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 23 18:35:28 compute-0 ceph-osd[85953]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 23 18:35:28 compute-0 ceph-osd[85953]: osd.0 10 check_osdmap_features require_osd_release unknown -> tentacle
Jan 23 18:35:28 compute-0 ceph-osd[85953]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 23 18:35:28 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-0[85949]: 2026-01-23T18:35:28.252+0000 7fcf0fdc9640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 23 18:35:28 compute-0 ceph-osd[85953]: osd.0 10 set_numa_affinity not setting numa affinity
Jan 23 18:35:28 compute-0 ceph-osd[85953]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 23 18:35:28 compute-0 systemd[1]: Starting Ceph osd.2 for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:35:28 compute-0 podman[87771]: 2026-01-23 18:35:28.551448361 +0000 UTC m=+0.061400721 container create 7158dbe290de20efcbb1aca0f0793c5b40cc29c110cf3156da2b83c5cc485fa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 23 18:35:28 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2499281510; not ready for session (expect reconnect)
Jan 23 18:35:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:28 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 18:35:28 compute-0 podman[87771]: 2026-01-23 18:35:28.511029425 +0000 UTC m=+0.020981805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:28 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c4013aa8f8015d15188a68099baf3c8a4a1cc27aa109c882046ced06887e14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c4013aa8f8015d15188a68099baf3c8a4a1cc27aa109c882046ced06887e14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c4013aa8f8015d15188a68099baf3c8a4a1cc27aa109c882046ced06887e14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c4013aa8f8015d15188a68099baf3c8a4a1cc27aa109c882046ced06887e14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c4013aa8f8015d15188a68099baf3c8a4a1cc27aa109c882046ced06887e14/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:28 compute-0 podman[87771]: 2026-01-23 18:35:28.67685906 +0000 UTC m=+0.186811430 container init 7158dbe290de20efcbb1aca0f0793c5b40cc29c110cf3156da2b83c5cc485fa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 18:35:28 compute-0 podman[87771]: 2026-01-23 18:35:28.683891455 +0000 UTC m=+0.193843855 container start 7158dbe290de20efcbb1aca0f0793c5b40cc29c110cf3156da2b83c5cc485fa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:35:28 compute-0 podman[87771]: 2026-01-23 18:35:28.725468563 +0000 UTC m=+0.235420923 container attach 7158dbe290de20efcbb1aca0f0793c5b40cc29c110cf3156da2b83c5cc485fa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:35:28
Jan 23 18:35:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:35:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:35:28 compute-0 ceph-mgr[75390]: [balancer INFO root] No pools available
Jan 23 18:35:28 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2862543050; not ready for session (expect reconnect)
Jan 23 18:35:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:28 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:28 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:28 compute-0 bash[87771]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:28 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:28 compute-0 bash[87771]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 23 18:35:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:35:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 23 18:35:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:29 compute-0 ceph-osd[85953]: osd.0 11 state: booting -> active
Jan 23 18:35:29 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2499281510,v1:192.168.122.100:6803/2499281510] boot
Jan 23 18:35:29 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 23 18:35:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 18:35:29 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:29 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:29 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:29 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:29 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:29 compute-0 lvm[87869]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:35:29 compute-0 lvm[87869]: VG ceph_vg0 finished
Jan 23 18:35:29 compute-0 lvm[87872]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:35:29 compute-0 lvm[87872]: VG ceph_vg1 finished
Jan 23 18:35:29 compute-0 lvm[87874]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:35:29 compute-0 lvm[87874]: VG ceph_vg2 finished
Jan 23 18:35:29 compute-0 lvm[87875]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:35:29 compute-0 lvm[87875]: VG ceph_vg1 finished
Jan 23 18:35:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 23 18:35:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:29 compute-0 bash[87771]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 23 18:35:29 compute-0 bash[87771]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:29 compute-0 bash[87771]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 18:35:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 23 18:35:29 compute-0 bash[87771]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 23 18:35:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 23 18:35:29 compute-0 bash[87771]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 23 18:35:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:29 compute-0 bash[87771]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:29 compute-0 bash[87771]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 23 18:35:29 compute-0 bash[87771]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 23 18:35:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 23 18:35:29 compute-0 bash[87771]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 23 18:35:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate[87786]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 23 18:35:29 compute-0 bash[87771]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 23 18:35:29 compute-0 systemd[1]: libpod-7158dbe290de20efcbb1aca0f0793c5b40cc29c110cf3156da2b83c5cc485fa5.scope: Deactivated successfully.
Jan 23 18:35:29 compute-0 systemd[1]: libpod-7158dbe290de20efcbb1aca0f0793c5b40cc29c110cf3156da2b83c5cc485fa5.scope: Consumed 1.567s CPU time.
Jan 23 18:35:29 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2862543050; not ready for session (expect reconnect)
Jan 23 18:35:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:29 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:29 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:29 compute-0 podman[87989]: 2026-01-23 18:35:29.88134721 +0000 UTC m=+0.022993877 container died 7158dbe290de20efcbb1aca0f0793c5b40cc29c110cf3156da2b83c5cc485fa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 18:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4c4013aa8f8015d15188a68099baf3c8a4a1cc27aa109c882046ced06887e14-merged.mount: Deactivated successfully.
Jan 23 18:35:30 compute-0 ceph-mon[75097]: purged_snaps scrub starts
Jan 23 18:35:30 compute-0 ceph-mon[75097]: purged_snaps scrub ok
Jan 23 18:35:30 compute-0 ceph-mon[75097]: OSD bench result of 4662.125450 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 18:35:30 compute-0 ceph-mon[75097]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 18:35:30 compute-0 ceph-mon[75097]: osd.0 [v2:192.168.122.100:6802/2499281510,v1:192.168.122.100:6803/2499281510] boot
Jan 23 18:35:30 compute-0 ceph-mon[75097]: osdmap e11: 3 total, 1 up, 3 in
Jan 23 18:35:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 23 18:35:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: [devicehealth INFO root] creating mgr pool
Jan 23 18:35:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 23 18:35:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 23 18:35:30 compute-0 podman[87989]: 2026-01-23 18:35:30.101185781 +0000 UTC m=+0.242832508 container remove 7158dbe290de20efcbb1aca0f0793c5b40cc29c110cf3156da2b83c5cc485fa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 18:35:30 compute-0 podman[88053]: 2026-01-23 18:35:30.389868829 +0000 UTC m=+0.083926385 container create 23cb766d14d0e57a69ca64dccf7cf7ab6eb0a9009b51fcf0c3f8612caf8c0766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:30 compute-0 podman[88053]: 2026-01-23 18:35:30.333720078 +0000 UTC m=+0.027777654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff9def75732edcb96c8ba4a4d124c7848db87894d48f70de0ac81c0e50afd75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff9def75732edcb96c8ba4a4d124c7848db87894d48f70de0ac81c0e50afd75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff9def75732edcb96c8ba4a4d124c7848db87894d48f70de0ac81c0e50afd75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff9def75732edcb96c8ba4a4d124c7848db87894d48f70de0ac81c0e50afd75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff9def75732edcb96c8ba4a4d124c7848db87894d48f70de0ac81c0e50afd75/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:30 compute-0 podman[88053]: 2026-01-23 18:35:30.500913069 +0000 UTC m=+0.194970655 container init 23cb766d14d0e57a69ca64dccf7cf7ab6eb0a9009b51fcf0c3f8612caf8c0766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 23 18:35:30 compute-0 podman[88053]: 2026-01-23 18:35:30.509891095 +0000 UTC m=+0.203948651 container start 23cb766d14d0e57a69ca64dccf7cf7ab6eb0a9009b51fcf0c3f8612caf8c0766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:35:30 compute-0 bash[88053]: 23cb766d14d0e57a69ca64dccf7cf7ab6eb0a9009b51fcf0c3f8612caf8c0766
Jan 23 18:35:30 compute-0 systemd[1]: Started Ceph osd.2 for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:35:30 compute-0 ceph-osd[88072]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 18:35:30 compute-0 ceph-osd[88072]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: pidfile_write: ignore empty --pid-file
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 sudo[87068]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240400 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372240000 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 sudo[88099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:30 compute-0 sudo[88144]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfngbliqzkhoqejjuazjxuxjzvokmnvn ; /usr/bin/python3'
Jan 23 18:35:30 compute-0 ceph-osd[88072]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 23 18:35:30 compute-0 sudo[88144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:30 compute-0 sudo[88099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:30 compute-0 sudo[88099]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:30 compute-0 ceph-osd[88072]: load: jerasure load: lrc 
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 sudo[88155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:35:30 compute-0 ceph-osd[88072]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 23 18:35:30 compute-0 ceph-osd[88072]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 23 18:35:30 compute-0 sudo[88155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2862543050; not ready for session (expect reconnect)
Jan 23 18:35:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:30 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:30 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:30 compute-0 python3[88147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372241c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372ee1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372ee1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372ee1800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372ee1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs mount
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs mount shared_bdev_used = 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: RocksDB version: 7.9.2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Git sha 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: DB SUMMARY
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: DB Session ID:  NSYNQ1YA8AN73W97GZQO
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: CURRENT file:  CURRENT
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                         Options.error_if_exists: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.create_if_missing: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                                     Options.env: 0x5643720d1ea0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                                Options.info_log: 0x56437312c8a0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                              Options.statistics: (nil)
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.use_fsync: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                              Options.db_log_dir: 
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                                 Options.wal_dir: db.wal
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.write_buffer_manager: 0x564372fd2b40
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.unordered_write: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.row_cache: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                              Options.wal_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.two_write_queues: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.wal_compression: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.atomic_flush: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.max_background_jobs: 4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.max_background_compactions: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.max_subcompactions: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.max_open_files: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Compression algorithms supported:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         kZSTD supported: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         kXpressCompression supported: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         kBZip2Compression supported: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         kLZ4Compression supported: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         kZlibCompression supported: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         kSnappyCompression supported: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56437312cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d58d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56437312cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d58d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56437312cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d58d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56437312cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d58d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56437312cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d58d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56437312cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d58d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56437312cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d58d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56437312cc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d5a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56437312cc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d5a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56437312cc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d5a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bbd0a792-f3e3-42fc-b5e0-59838243f2d9
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193330985519, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193330987671, "job": 1, "event": "recovery_finished"}
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: freelist init
Jan 23 18:35:30 compute-0 ceph-osd[88072]: freelist _read_cfg
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 23 18:35:30 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bluefs umount
Jan 23 18:35:30 compute-0 ceph-osd[88072]: bdev(0x564372ee1800 /var/lib/ceph/osd/ceph-2/block) close
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 18:35:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 23 18:35:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bdev(0x564372ee1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bdev(0x564372ee1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bdev(0x564372ee1800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bdev(0x564372ee1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluefs mount
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 23 18:35:31 compute-0 podman[88200]: 2026-01-23 18:35:31.025221392 +0000 UTC m=+0.080635468 container create 5514dbbd25f8c40b2b84961d6747f37b444d5733c52698c25b358de2306b3d2f (image=quay.io/ceph/ceph:v20, name=elegant_mendeleev, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluefs mount shared_bdev_used = 27262976
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: RocksDB version: 7.9.2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Git sha 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: DB SUMMARY
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: DB Session ID:  NSYNQ1YA8AN73W97GZQP
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: CURRENT file:  CURRENT
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                         Options.error_if_exists: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.create_if_missing: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                                     Options.env: 0x5643720d1d50
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                                Options.info_log: 0x56437312db00
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                              Options.statistics: (nil)
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.use_fsync: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                              Options.db_log_dir: 
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                                 Options.wal_dir: db.wal
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.write_buffer_manager: 0x564372fd3900
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.unordered_write: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.row_cache: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                              Options.wal_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.two_write_queues: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.wal_compression: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.atomic_flush: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.max_background_jobs: 4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.max_background_compactions: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.max_subcompactions: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.max_open_files: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Compression algorithms supported:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         kZSTD supported: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         kXpressCompression supported: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         kBZip2Compression supported: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         kLZ4Compression supported: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         kZlibCompression supported: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         kSnappyCompression supported: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564373166220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d5a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564373166220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d5a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564373166220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d5a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564373166220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d5a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564373166220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d5a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564373166220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d5a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564373166220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d5a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564373166300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d54b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564373166300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d54b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:           Options.merge_operator: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 18:35:31 compute-0 podman[88200]: 2026-01-23 18:35:30.986360217 +0000 UTC m=+0.041774313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564373166300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5643720d54b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.compression: LZ4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.num_levels: 7
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.bloom_locality: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                               Options.ttl: 2592000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                       Options.enable_blob_files: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                           Options.min_blob_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bbd0a792-f3e3-42fc-b5e0-59838243f2d9
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193331067577, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 23 18:35:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 23 18:35:31 compute-0 systemd[1]: Started libpod-conmon-5514dbbd25f8c40b2b84961d6747f37b444d5733c52698c25b358de2306b3d2f.scope.
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193331100422, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193331, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bbd0a792-f3e3-42fc-b5e0-59838243f2d9", "db_session_id": "NSYNQ1YA8AN73W97GZQP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 23 18:35:31 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:31 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:31 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 23 18:35:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193331114834, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193331, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bbd0a792-f3e3-42fc-b5e0-59838243f2d9", "db_session_id": "NSYNQ1YA8AN73W97GZQP", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:35:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:31 compute-0 ceph-osd[85953]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 23 18:35:31 compute-0 ceph-osd[85953]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 23 18:35:31 compute-0 ceph-osd[85953]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 23 18:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf57d681e543600c37fa8fe6802ccb87c2bfaab9f9977afe8b36d14ede105cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf57d681e543600c37fa8fe6802ccb87c2bfaab9f9977afe8b36d14ede105cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf57d681e543600c37fa8fe6802ccb87c2bfaab9f9977afe8b36d14ede105cc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193331141495, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193331, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bbd0a792-f3e3-42fc-b5e0-59838243f2d9", "db_session_id": "NSYNQ1YA8AN73W97GZQP", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193331173895, "job": 1, "event": "recovery_finished"}
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 23 18:35:31 compute-0 podman[88200]: 2026-01-23 18:35:31.180024007 +0000 UTC m=+0.235438113 container init 5514dbbd25f8c40b2b84961d6747f37b444d5733c52698c25b358de2306b3d2f (image=quay.io/ceph/ceph:v20, name=elegant_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:35:31 compute-0 podman[88200]: 2026-01-23 18:35:31.187507744 +0000 UTC m=+0.242921820 container start 5514dbbd25f8c40b2b84961d6747f37b444d5733c52698c25b358de2306b3d2f (image=quay.io/ceph/ceph:v20, name=elegant_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:31 compute-0 podman[88598]: 2026-01-23 18:35:31.209904305 +0000 UTC m=+0.092425280 container create 5752a287747558624f1f19d4e0827e7495cb60962cf20392288b850963598ba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 23 18:35:31 compute-0 podman[88598]: 2026-01-23 18:35:31.139694523 +0000 UTC m=+0.022215528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:31 compute-0 systemd[1]: Started libpod-conmon-5752a287747558624f1f19d4e0827e7495cb60962cf20392288b850963598ba1.scope.
Jan 23 18:35:31 compute-0 podman[88200]: 2026-01-23 18:35:31.304361038 +0000 UTC m=+0.359775114 container attach 5514dbbd25f8c40b2b84961d6747f37b444d5733c52698c25b358de2306b3d2f (image=quay.io/ceph/ceph:v20, name=elegant_mendeleev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56437312fc00
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: DB pointer 0x5643732e6000
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 23 18:35:31 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:35:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 18:35:31 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 23 18:35:31 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 23 18:35:31 compute-0 ceph-osd[88072]: _get_class not permitted to load lua
Jan 23 18:35:31 compute-0 ceph-osd[88072]: _get_class not permitted to load sdk
Jan 23 18:35:31 compute-0 ceph-osd[88072]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 23 18:35:31 compute-0 ceph-osd[88072]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 23 18:35:31 compute-0 ceph-osd[88072]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 23 18:35:31 compute-0 ceph-osd[88072]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 23 18:35:31 compute-0 ceph-osd[88072]: osd.2 0 load_pgs
Jan 23 18:35:31 compute-0 ceph-osd[88072]: osd.2 0 load_pgs opened 0 pgs
Jan 23 18:35:31 compute-0 ceph-osd[88072]: osd.2 0 log_to_monitors true
Jan 23 18:35:31 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2[88068]: 2026-01-23T18:35:31.316+0000 7f42e78018c0 -1 osd.2 0 log_to_monitors true
Jan 23 18:35:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 23 18:35:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3948518255,v1:192.168.122.100:6811/3948518255]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 23 18:35:31 compute-0 podman[88598]: 2026-01-23 18:35:31.345867592 +0000 UTC m=+0.228388567 container init 5752a287747558624f1f19d4e0827e7495cb60962cf20392288b850963598ba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 18:35:31 compute-0 podman[88598]: 2026-01-23 18:35:31.35183217 +0000 UTC m=+0.234353145 container start 5752a287747558624f1f19d4e0827e7495cb60962cf20392288b850963598ba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:35:31 compute-0 goofy_mclean[88618]: 167 167
Jan 23 18:35:31 compute-0 systemd[1]: libpod-5752a287747558624f1f19d4e0827e7495cb60962cf20392288b850963598ba1.scope: Deactivated successfully.
Jan 23 18:35:31 compute-0 podman[88598]: 2026-01-23 18:35:31.394113126 +0000 UTC m=+0.276634121 container attach 5752a287747558624f1f19d4e0827e7495cb60962cf20392288b850963598ba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:31 compute-0 podman[88598]: 2026-01-23 18:35:31.394842345 +0000 UTC m=+0.277363330 container died 5752a287747558624f1f19d4e0827e7495cb60962cf20392288b850963598ba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd4568dd57ca985e2c380db319720b3a70f4f4631adf4bd524099adaa7c81ef7-merged.mount: Deactivated successfully.
Jan 23 18:35:31 compute-0 podman[88598]: 2026-01-23 18:35:31.557877817 +0000 UTC m=+0.440398802 container remove 5752a287747558624f1f19d4e0827e7495cb60962cf20392288b850963598ba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:31 compute-0 systemd[1]: libpod-conmon-5752a287747558624f1f19d4e0827e7495cb60962cf20392288b850963598ba1.scope: Deactivated successfully.
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 23 18:35:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760103503' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 23 18:35:31 compute-0 elegant_mendeleev[88596]: 
Jan 23 18:35:31 compute-0 elegant_mendeleev[88596]: {"fsid":"4da2adac-d096-5ead-9ec5-3108259ba9e4","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":84,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":12,"num_osds":3,"num_up_osds":1,"osd_up_since":1769193328,"num_in_osds":3,"osd_in_since":1769193312,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-23T18:34:04.461111+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-23T18:34:04.464417+0000","services":{}},"progress_events":{}}
Jan 23 18:35:31 compute-0 systemd[1]: libpod-5514dbbd25f8c40b2b84961d6747f37b444d5733c52698c25b358de2306b3d2f.scope: Deactivated successfully.
Jan 23 18:35:31 compute-0 podman[88200]: 2026-01-23 18:35:31.732172215 +0000 UTC m=+0.787586291 container died 5514dbbd25f8c40b2b84961d6747f37b444d5733c52698c25b358de2306b3d2f (image=quay.io/ceph/ceph:v20, name=elegant_mendeleev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-caf57d681e543600c37fa8fe6802ccb87c2bfaab9f9977afe8b36d14ede105cc-merged.mount: Deactivated successfully.
Jan 23 18:35:31 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2862543050; not ready for session (expect reconnect)
Jan 23 18:35:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:31 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:31 compute-0 podman[88694]: 2026-01-23 18:35:31.798433684 +0000 UTC m=+0.089582824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:31 compute-0 podman[88200]: 2026-01-23 18:35:31.904514903 +0000 UTC m=+0.959929009 container remove 5514dbbd25f8c40b2b84961d6747f37b444d5733c52698c25b358de2306b3d2f (image=quay.io/ceph/ceph:v20, name=elegant_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 18:35:31 compute-0 systemd[1]: libpod-conmon-5514dbbd25f8c40b2b84961d6747f37b444d5733c52698c25b358de2306b3d2f.scope: Deactivated successfully.
Jan 23 18:35:31 compute-0 sudo[88144]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:31 compute-0 podman[88694]: 2026-01-23 18:35:31.945171035 +0000 UTC m=+0.236320105 container create 8e9fe14690d9994d4a166583c931e41ebe44941d19dc7f9c2e01011b5c68403b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:32 compute-0 systemd[1]: Started libpod-conmon-8e9fe14690d9994d4a166583c931e41ebe44941d19dc7f9c2e01011b5c68403b.scope.
Jan 23 18:35:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e96b70b8ede41a0fdde7b861e3d489720d155cde1ed2188f8a3bc542736c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e96b70b8ede41a0fdde7b861e3d489720d155cde1ed2188f8a3bc542736c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e96b70b8ede41a0fdde7b861e3d489720d155cde1ed2188f8a3bc542736c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e96b70b8ede41a0fdde7b861e3d489720d155cde1ed2188f8a3bc542736c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:32 compute-0 podman[88694]: 2026-01-23 18:35:32.08897041 +0000 UTC m=+0.380119470 container init 8e9fe14690d9994d4a166583c931e41ebe44941d19dc7f9c2e01011b5c68403b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lumiere, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 23 18:35:32 compute-0 podman[88694]: 2026-01-23 18:35:32.097342051 +0000 UTC m=+0.388491141 container start 8e9fe14690d9994d4a166583c931e41ebe44941d19dc7f9c2e01011b5c68403b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 18:35:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 23 18:35:32 compute-0 podman[88694]: 2026-01-23 18:35:32.109997064 +0000 UTC m=+0.401146144 container attach 8e9fe14690d9994d4a166583c931e41ebe44941d19dc7f9c2e01011b5c68403b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 23 18:35:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3948518255,v1:192.168.122.100:6811/3948518255]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 23 18:35:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 23 18:35:32 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 23 18:35:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 23 18:35:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3948518255,v1:192.168.122.100:6811/3948518255]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 23 18:35:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 23 18:35:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:32 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:32 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:32 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:32 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:32 compute-0 ceph-mon[75097]: pgmap v33: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 18:35:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 23 18:35:32 compute-0 ceph-mon[75097]: osdmap e12: 3 total, 1 up, 3 in
Jan 23 18:35:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 23 18:35:32 compute-0 ceph-mon[75097]: from='osd.2 [v2:192.168.122.100:6810/3948518255,v1:192.168.122.100:6811/3948518255]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 23 18:35:32 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/760103503' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 23 18:35:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:32 compute-0 sudo[88752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iluqunyrzabrlrndpjwiitfxoegvkley ; /usr/bin/python3'
Jan 23 18:35:32 compute-0 sudo[88752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:32 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 23 18:35:32 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 23 18:35:32 compute-0 python3[88754]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:32 compute-0 podman[88775]: 2026-01-23 18:35:32.560701847 +0000 UTC m=+0.084853850 container create 2a7c8054b74dec54d0acbdf9fe3bff977641fe81df11d8f71621c5fcaf57dfa8 (image=quay.io/ceph/ceph:v20, name=keen_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 18:35:32 compute-0 podman[88775]: 2026-01-23 18:35:32.498523747 +0000 UTC m=+0.022675770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:32 compute-0 systemd[1]: Started libpod-conmon-2a7c8054b74dec54d0acbdf9fe3bff977641fe81df11d8f71621c5fcaf57dfa8.scope.
Jan 23 18:35:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afad33fcea2b9b70710ff03aa651b76072b08b9d99bf35c4a33254c50ec86e47/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afad33fcea2b9b70710ff03aa651b76072b08b9d99bf35c4a33254c50ec86e47/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:32 compute-0 podman[88775]: 2026-01-23 18:35:32.770398051 +0000 UTC m=+0.294550074 container init 2a7c8054b74dec54d0acbdf9fe3bff977641fe81df11d8f71621c5fcaf57dfa8 (image=quay.io/ceph/ceph:v20, name=keen_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:32 compute-0 lvm[88848]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:35:32 compute-0 lvm[88848]: VG ceph_vg1 finished
Jan 23 18:35:32 compute-0 podman[88775]: 2026-01-23 18:35:32.782103359 +0000 UTC m=+0.306255362 container start 2a7c8054b74dec54d0acbdf9fe3bff977641fe81df11d8f71621c5fcaf57dfa8 (image=quay.io/ceph/ceph:v20, name=keen_volhard, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 23 18:35:32 compute-0 lvm[88845]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:35:32 compute-0 lvm[88845]: VG ceph_vg0 finished
Jan 23 18:35:32 compute-0 ceph-osd[87009]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 17.257 iops: 4417.908 elapsed_sec: 0.679
Jan 23 18:35:32 compute-0 ceph-osd[87009]: log_channel(cluster) log [WRN] : OSD bench result of 4417.908015 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 18:35:32 compute-0 podman[88775]: 2026-01-23 18:35:32.795068081 +0000 UTC m=+0.319220084 container attach 2a7c8054b74dec54d0acbdf9fe3bff977641fe81df11d8f71621c5fcaf57dfa8 (image=quay.io/ceph/ceph:v20, name=keen_volhard, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 23 18:35:32 compute-0 ceph-osd[87009]: osd.1 0 waiting for initial osdmap
Jan 23 18:35:32 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1[87005]: 2026-01-23T18:35:32.793+0000 7f17a55c7640 -1 osd.1 0 waiting for initial osdmap
Jan 23 18:35:32 compute-0 lvm[88849]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:35:32 compute-0 lvm[88849]: VG ceph_vg2 finished
Jan 23 18:35:32 compute-0 ceph-osd[87009]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 23 18:35:32 compute-0 ceph-osd[87009]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 23 18:35:32 compute-0 ceph-osd[87009]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 23 18:35:32 compute-0 ceph-osd[87009]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Jan 23 18:35:32 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-1[87005]: 2026-01-23T18:35:32.827+0000 7f17a03cc640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 23 18:35:32 compute-0 ceph-osd[87009]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 23 18:35:32 compute-0 ceph-osd[87009]: osd.1 13 set_numa_affinity not setting numa affinity
Jan 23 18:35:32 compute-0 ceph-osd[87009]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 23 18:35:32 compute-0 strange_lumiere[88723]: {}
Jan 23 18:35:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 18:35:32 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2862543050; not ready for session (expect reconnect)
Jan 23 18:35:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:32 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:32 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 18:35:32 compute-0 systemd[1]: libpod-8e9fe14690d9994d4a166583c931e41ebe44941d19dc7f9c2e01011b5c68403b.scope: Deactivated successfully.
Jan 23 18:35:32 compute-0 systemd[1]: libpod-8e9fe14690d9994d4a166583c931e41ebe44941d19dc7f9c2e01011b5c68403b.scope: Consumed 1.275s CPU time.
Jan 23 18:35:32 compute-0 podman[88873]: 2026-01-23 18:35:32.944863394 +0000 UTC m=+0.028880833 container died 8e9fe14690d9994d4a166583c931e41ebe44941d19dc7f9c2e01011b5c68403b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lumiere, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b49e96b70b8ede41a0fdde7b861e3d489720d155cde1ed2188f8a3bc542736c6-merged.mount: Deactivated successfully.
Jan 23 18:35:32 compute-0 podman[88873]: 2026-01-23 18:35:32.993152488 +0000 UTC m=+0.077169937 container remove 8e9fe14690d9994d4a166583c931e41ebe44941d19dc7f9c2e01011b5c68403b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 23 18:35:32 compute-0 systemd[1]: libpod-conmon-8e9fe14690d9994d4a166583c931e41ebe44941d19dc7f9c2e01011b5c68403b.scope: Deactivated successfully.
Jan 23 18:35:33 compute-0 sudo[88155]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:35:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:35:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 23 18:35:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3948518255,v1:192.168.122.100:6811/3948518255]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 23 18:35:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 23 18:35:33 compute-0 ceph-osd[88072]: osd.2 0 done with init, starting boot process
Jan 23 18:35:33 compute-0 ceph-osd[88072]: osd.2 0 start_boot
Jan 23 18:35:33 compute-0 ceph-osd[88072]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 23 18:35:33 compute-0 ceph-osd[88072]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 23 18:35:33 compute-0 ceph-osd[88072]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 23 18:35:33 compute-0 ceph-osd[88072]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 23 18:35:33 compute-0 ceph-osd[88072]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 23 18:35:33 compute-0 sudo[88888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:35:33 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2862543050,v1:192.168.122.100:6807/2862543050] boot
Jan 23 18:35:33 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 23 18:35:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 18:35:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:33 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:33 compute-0 sudo[88888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:33 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3948518255; not ready for session (expect reconnect)
Jan 23 18:35:33 compute-0 sudo[88888]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:33 compute-0 ceph-osd[87009]: osd.1 14 state: booting -> active
Jan 23 18:35:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:35:33 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 18:35:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3251039114' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:33 compute-0 sudo[88913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:33 compute-0 sudo[88913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:33 compute-0 sudo[88913]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='osd.2 [v2:192.168.122.100:6810/3948518255,v1:192.168.122.100:6811/3948518255]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 23 18:35:33 compute-0 ceph-mon[75097]: osdmap e13: 3 total, 1 up, 3 in
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='osd.2 [v2:192.168.122.100:6810/3948518255,v1:192.168.122.100:6811/3948518255]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:33 compute-0 ceph-mon[75097]: pgmap v36: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='osd.2 [v2:192.168.122.100:6810/3948518255,v1:192.168.122.100:6811/3948518255]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 23 18:35:33 compute-0 ceph-mon[75097]: osd.1 [v2:192.168.122.100:6806/2862543050,v1:192.168.122.100:6807/2862543050] boot
Jan 23 18:35:33 compute-0 ceph-mon[75097]: osdmap e14: 3 total, 2 up, 3 in
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:33 compute-0 sudo[88941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:35:33 compute-0 sudo[88941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:33 compute-0 podman[89009]: 2026-01-23 18:35:33.761978565 +0000 UTC m=+0.100617126 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:33 compute-0 podman[89009]: 2026-01-23 18:35:33.880002949 +0000 UTC m=+0.218641510 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:35:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 23 18:35:34 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3948518255; not ready for session (expect reconnect)
Jan 23 18:35:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:34 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3251039114' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 23 18:35:34 compute-0 keen_volhard[88828]: pool 'vms' created
Jan 23 18:35:34 compute-0 systemd[1]: libpod-2a7c8054b74dec54d0acbdf9fe3bff977641fe81df11d8f71621c5fcaf57dfa8.scope: Deactivated successfully.
Jan 23 18:35:34 compute-0 podman[88775]: 2026-01-23 18:35:34.166398586 +0000 UTC m=+1.690550589 container died 2a7c8054b74dec54d0acbdf9fe3bff977641fe81df11d8f71621c5fcaf57dfa8 (image=quay.io/ceph/ceph:v20, name=keen_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 18:35:34 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 23 18:35:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:34 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:34 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:35:34 compute-0 ceph-mon[75097]: OSD bench result of 4417.908015 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 18:35:34 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3251039114' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:34 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3251039114' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:34 compute-0 ceph-mon[75097]: osdmap e15: 3 total, 2 up, 3 in
Jan 23 18:35:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-afad33fcea2b9b70710ff03aa651b76072b08b9d99bf35c4a33254c50ec86e47-merged.mount: Deactivated successfully.
Jan 23 18:35:34 compute-0 ceph-mgr[75390]: [devicehealth INFO root] creating main.db for devicehealth
Jan 23 18:35:34 compute-0 podman[88775]: 2026-01-23 18:35:34.397109314 +0000 UTC m=+1.921261357 container remove 2a7c8054b74dec54d0acbdf9fe3bff977641fe81df11d8f71621c5fcaf57dfa8 (image=quay.io/ceph/ceph:v20, name=keen_volhard, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:34 compute-0 systemd[1]: libpod-conmon-2a7c8054b74dec54d0acbdf9fe3bff977641fe81df11d8f71621c5fcaf57dfa8.scope: Deactivated successfully.
Jan 23 18:35:34 compute-0 sudo[88752]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:34 compute-0 ceph-mgr[75390]: [devicehealth INFO root] Check health
Jan 23 18:35:34 compute-0 ceph-mgr[75390]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 23 18:35:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 18:35:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 23 18:35:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 23 18:35:34 compute-0 sudo[89191]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtgbiypyozqcnucoyomkguapzggynyrt ; /usr/bin/python3'
Jan 23 18:35:34 compute-0 sudo[89192]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 23 18:35:34 compute-0 sudo[89191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:34 compute-0 sudo[89192]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 23 18:35:34 compute-0 sudo[89192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 23 18:35:34 compute-0 sudo[89192]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 23 18:35:34 compute-0 sudo[88941]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:35:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:35:34 compute-0 python3[89201]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:34 compute-0 sudo[89222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:34 compute-0 sudo[89222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:34 compute-0 podman[89211]: 2026-01-23 18:35:34.79604758 +0000 UTC m=+0.075523314 container create 4fa257ac3a7c1262544203d50558338312abd9f2352cf5066c4c0c8728cdb990 (image=quay.io/ceph/ceph:v20, name=hardcore_gould, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:35:34 compute-0 sudo[89222]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:34 compute-0 podman[89211]: 2026-01-23 18:35:34.748416733 +0000 UTC m=+0.027892487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:34 compute-0 systemd[1]: Started libpod-conmon-4fa257ac3a7c1262544203d50558338312abd9f2352cf5066c4c0c8728cdb990.scope.
Jan 23 18:35:34 compute-0 sudo[89249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- inventory --format=json-pretty --filter-for-batch
Jan 23 18:35:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v39: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 23 18:35:34 compute-0 sudo[89249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0d029feada54679f3cdf65b44edf53520d4f9ebd44cbaba51795bea7cf948cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0d029feada54679f3cdf65b44edf53520d4f9ebd44cbaba51795bea7cf948cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:34 compute-0 podman[89211]: 2026-01-23 18:35:34.907191543 +0000 UTC m=+0.186667297 container init 4fa257ac3a7c1262544203d50558338312abd9f2352cf5066c4c0c8728cdb990 (image=quay.io/ceph/ceph:v20, name=hardcore_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 18:35:34 compute-0 podman[89211]: 2026-01-23 18:35:34.914660529 +0000 UTC m=+0.194136263 container start 4fa257ac3a7c1262544203d50558338312abd9f2352cf5066c4c0c8728cdb990 (image=quay.io/ceph/ceph:v20, name=hardcore_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:34 compute-0 podman[89211]: 2026-01-23 18:35:34.977973401 +0000 UTC m=+0.257449135 container attach 4fa257ac3a7c1262544203d50558338312abd9f2352cf5066c4c0c8728cdb990 (image=quay.io/ceph/ceph:v20, name=hardcore_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:35:35 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3948518255; not ready for session (expect reconnect)
Jan 23 18:35:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:35 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:35 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 23 18:35:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 23 18:35:35 compute-0 podman[89311]: 2026-01-23 18:35:35.163170617 +0000 UTC m=+0.055594498 container create c2d06440510fb5add0ae74d812eca04b7a29b05125b05b82442e84b21c01aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 18:35:35 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 23 18:35:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:35 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:35 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:35 compute-0 podman[89311]: 2026-01-23 18:35:35.129321544 +0000 UTC m=+0.021745435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:35 compute-0 systemd[1]: Started libpod-conmon-c2d06440510fb5add0ae74d812eca04b7a29b05125b05b82442e84b21c01aff2.scope.
Jan 23 18:35:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:35 compute-0 podman[89311]: 2026-01-23 18:35:35.278613633 +0000 UTC m=+0.171037504 container init c2d06440510fb5add0ae74d812eca04b7a29b05125b05b82442e84b21c01aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:35 compute-0 podman[89311]: 2026-01-23 18:35:35.284247431 +0000 UTC m=+0.176671312 container start c2d06440510fb5add0ae74d812eca04b7a29b05125b05b82442e84b21c01aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 18:35:35 compute-0 cranky_saha[89327]: 167 167
Jan 23 18:35:35 compute-0 systemd[1]: libpod-c2d06440510fb5add0ae74d812eca04b7a29b05125b05b82442e84b21c01aff2.scope: Deactivated successfully.
Jan 23 18:35:35 compute-0 podman[89311]: 2026-01-23 18:35:35.311774848 +0000 UTC m=+0.204198739 container attach c2d06440510fb5add0ae74d812eca04b7a29b05125b05b82442e84b21c01aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:35 compute-0 podman[89311]: 2026-01-23 18:35:35.312447445 +0000 UTC m=+0.204871316 container died c2d06440510fb5add0ae74d812eca04b7a29b05125b05b82442e84b21c01aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 18:35:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1809416832' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:35 compute-0 ceph-mon[75097]: purged_snaps scrub starts
Jan 23 18:35:35 compute-0 ceph-mon[75097]: purged_snaps scrub ok
Jan 23 18:35:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 23 18:35:35 compute-0 ceph-mon[75097]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 23 18:35:35 compute-0 ceph-mon[75097]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 23 18:35:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:35 compute-0 ceph-mon[75097]: pgmap v39: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 23 18:35:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:35 compute-0 ceph-mon[75097]: osdmap e16: 3 total, 2 up, 3 in
Jan 23 18:35:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e93bca2a77f2a5baca9af63bfb61ded5c323d1369dd87ea82a72338a68c7f451-merged.mount: Deactivated successfully.
Jan 23 18:35:35 compute-0 podman[89311]: 2026-01-23 18:35:35.471594655 +0000 UTC m=+0.364018556 container remove c2d06440510fb5add0ae74d812eca04b7a29b05125b05b82442e84b21c01aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 18:35:35 compute-0 systemd[1]: libpod-conmon-c2d06440510fb5add0ae74d812eca04b7a29b05125b05b82442e84b21c01aff2.scope: Deactivated successfully.
Jan 23 18:35:35 compute-0 podman[89353]: 2026-01-23 18:35:35.632439749 +0000 UTC m=+0.054207472 container create 125692a3be62963a0265aa941968c487d127c8c03028b430035dafc48dacd502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:35 compute-0 systemd[1]: Started libpod-conmon-125692a3be62963a0265aa941968c487d127c8c03028b430035dafc48dacd502.scope.
Jan 23 18:35:35 compute-0 podman[89353]: 2026-01-23 18:35:35.602084378 +0000 UTC m=+0.023852171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb1d01574e60223265b563b15c2ed888977af32491ac58361eb22b5415ea3ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb1d01574e60223265b563b15c2ed888977af32491ac58361eb22b5415ea3ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb1d01574e60223265b563b15c2ed888977af32491ac58361eb22b5415ea3ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb1d01574e60223265b563b15c2ed888977af32491ac58361eb22b5415ea3ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:35 compute-0 podman[89353]: 2026-01-23 18:35:35.74693573 +0000 UTC m=+0.168703523 container init 125692a3be62963a0265aa941968c487d127c8c03028b430035dafc48dacd502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 18:35:35 compute-0 podman[89353]: 2026-01-23 18:35:35.76095317 +0000 UTC m=+0.182720883 container start 125692a3be62963a0265aa941968c487d127c8c03028b430035dafc48dacd502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:35 compute-0 podman[89353]: 2026-01-23 18:35:35.783502685 +0000 UTC m=+0.205270408 container attach 125692a3be62963a0265aa941968c487d127c8c03028b430035dafc48dacd502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:35 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.rszfwe(active, since 66s)
Jan 23 18:35:36 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3948518255; not ready for session (expect reconnect)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:36 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1809416832' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Jan 23 18:35:36 compute-0 hardcore_gould[89275]: pool 'volumes' created
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:36 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:36 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:35:36 compute-0 systemd[1]: libpod-4fa257ac3a7c1262544203d50558338312abd9f2352cf5066c4c0c8728cdb990.scope: Deactivated successfully.
Jan 23 18:35:36 compute-0 podman[89713]: 2026-01-23 18:35:36.272360964 +0000 UTC m=+0.035919810 container died 4fa257ac3a7c1262544203d50558338312abd9f2352cf5066c4c0c8728cdb990 (image=quay.io/ceph/ceph:v20, name=hardcore_gould, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:35:36 compute-0 bold_bardeen[89370]: [
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:     {
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         "available": false,
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         "being_replaced": false,
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         "ceph_device_lvm": false,
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         "lsm_data": {},
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         "lvs": [],
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         "path": "/dev/sr0",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         "rejected_reasons": [
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "Insufficient space (<5GB)",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "Has a FileSystem"
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         ],
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         "sys_api": {
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "actuators": null,
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "device_nodes": [
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:                 "sr0"
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             ],
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "devname": "sr0",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "human_readable_size": "482.00 KB",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "id_bus": "ata",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "model": "QEMU DVD-ROM",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "nr_requests": "2",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "parent": "/dev/sr0",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "partitions": {},
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "path": "/dev/sr0",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "removable": "1",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "rev": "2.5+",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "ro": "0",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "rotational": "1",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "sas_address": "",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "sas_device_handle": "",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "scheduler_mode": "mq-deadline",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "sectors": 0,
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "sectorsize": "2048",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "size": 493568.0,
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "support_discard": "2048",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "type": "disk",
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:             "vendor": "QEMU"
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:         }
Jan 23 18:35:36 compute-0 bold_bardeen[89370]:     }
Jan 23 18:35:36 compute-0 bold_bardeen[89370]: ]
Jan 23 18:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0d029feada54679f3cdf65b44edf53520d4f9ebd44cbaba51795bea7cf948cf-merged.mount: Deactivated successfully.
Jan 23 18:35:36 compute-0 systemd[1]: libpod-125692a3be62963a0265aa941968c487d127c8c03028b430035dafc48dacd502.scope: Deactivated successfully.
Jan 23 18:35:36 compute-0 conmon[89370]: conmon 125692a3be62963a0265 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-125692a3be62963a0265aa941968c487d127c8c03028b430035dafc48dacd502.scope/container/memory.events
Jan 23 18:35:36 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1809416832' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mgrmap e9: compute-0.rszfwe(active, since 66s)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:36 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1809416832' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:36 compute-0 ceph-mon[75097]: osdmap e17: 3 total, 2 up, 3 in
Jan 23 18:35:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:36 compute-0 podman[89713]: 2026-01-23 18:35:36.419384353 +0000 UTC m=+0.182943189 container remove 4fa257ac3a7c1262544203d50558338312abd9f2352cf5066c4c0c8728cdb990 (image=quay.io/ceph/ceph:v20, name=hardcore_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 18:35:36 compute-0 systemd[1]: libpod-conmon-4fa257ac3a7c1262544203d50558338312abd9f2352cf5066c4c0c8728cdb990.scope: Deactivated successfully.
Jan 23 18:35:36 compute-0 podman[89353]: 2026-01-23 18:35:36.439623126 +0000 UTC m=+0.861390859 container died 125692a3be62963a0265aa941968c487d127c8c03028b430035dafc48dacd502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 23 18:35:36 compute-0 sudo[89191]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fb1d01574e60223265b563b15c2ed888977af32491ac58361eb22b5415ea3ac-merged.mount: Deactivated successfully.
Jan 23 18:35:36 compute-0 podman[89353]: 2026-01-23 18:35:36.548180281 +0000 UTC m=+0.969947994 container remove 125692a3be62963a0265aa941968c487d127c8c03028b430035dafc48dacd502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:36 compute-0 sudo[90220]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nommevalrqhavpovqlstuxibrhqmevot ; /usr/bin/python3'
Jan 23 18:35:36 compute-0 systemd[1]: libpod-conmon-125692a3be62963a0265aa941968c487d127c8c03028b430035dafc48dacd502.scope: Deactivated successfully.
Jan 23 18:35:36 compute-0 sudo[90220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:36 compute-0 sudo[89249]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 23 18:35:36 compute-0 ceph-mgr[75390]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43687k
Jan 23 18:35:36 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43687k
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 23 18:35:36 compute-0 ceph-mgr[75390]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
Jan 23 18:35:36 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:35:36 compute-0 python3[90222]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:35:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:36 compute-0 sudo[90224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:36 compute-0 sudo[90224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:36 compute-0 sudo[90224]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:36 compute-0 podman[90223]: 2026-01-23 18:35:36.804493464 +0000 UTC m=+0.070224664 container create af5b939e8a382e14efcd949b7485148cd1b5c24d044c860a240399f3d5bbcac1 (image=quay.io/ceph/ceph:v20, name=nifty_black, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:36 compute-0 sudo[90261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:35:36 compute-0 sudo[90261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v42: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 23 18:35:36 compute-0 systemd[1]: Started libpod-conmon-af5b939e8a382e14efcd949b7485148cd1b5c24d044c860a240399f3d5bbcac1.scope.
Jan 23 18:35:36 compute-0 podman[90223]: 2026-01-23 18:35:36.781804656 +0000 UTC m=+0.047535886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120aa0463401fd27ff479b1a160da6b513cc353305d8f3c102a289672d28e7a0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120aa0463401fd27ff479b1a160da6b513cc353305d8f3c102a289672d28e7a0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:36 compute-0 podman[90223]: 2026-01-23 18:35:36.925502297 +0000 UTC m=+0.191233537 container init af5b939e8a382e14efcd949b7485148cd1b5c24d044c860a240399f3d5bbcac1 (image=quay.io/ceph/ceph:v20, name=nifty_black, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 18:35:36 compute-0 podman[90223]: 2026-01-23 18:35:36.931639489 +0000 UTC m=+0.197370669 container start af5b939e8a382e14efcd949b7485148cd1b5c24d044c860a240399f3d5bbcac1 (image=quay.io/ceph/ceph:v20, name=nifty_black, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:36 compute-0 podman[90223]: 2026-01-23 18:35:36.958857307 +0000 UTC m=+0.224588527 container attach af5b939e8a382e14efcd949b7485148cd1b5c24d044c860a240399f3d5bbcac1 (image=quay.io/ceph/ceph:v20, name=nifty_black, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:37 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3948518255; not ready for session (expect reconnect)
Jan 23 18:35:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:37 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:37 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:37 compute-0 podman[90324]: 2026-01-23 18:35:37.157843887 +0000 UTC m=+0.041921327 container create d6d09337befa8897d7bb309e6e1a817e4a0ecb80a99a2a6845a7c1440bde9d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 23 18:35:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Jan 23 18:35:37 compute-0 systemd[1]: Started libpod-conmon-d6d09337befa8897d7bb309e6e1a817e4a0ecb80a99a2a6845a7c1440bde9d91.scope.
Jan 23 18:35:37 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Jan 23 18:35:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:37 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:37 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:35:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:37 compute-0 podman[90324]: 2026-01-23 18:35:37.138905998 +0000 UTC m=+0.022983268 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:37 compute-0 podman[90324]: 2026-01-23 18:35:37.262074918 +0000 UTC m=+0.146152208 container init d6d09337befa8897d7bb309e6e1a817e4a0ecb80a99a2a6845a7c1440bde9d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swanson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 18:35:37 compute-0 podman[90324]: 2026-01-23 18:35:37.268495677 +0000 UTC m=+0.152572907 container start d6d09337befa8897d7bb309e6e1a817e4a0ecb80a99a2a6845a7c1440bde9d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swanson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:37 compute-0 silly_swanson[90340]: 167 167
Jan 23 18:35:37 compute-0 systemd[1]: libpod-d6d09337befa8897d7bb309e6e1a817e4a0ecb80a99a2a6845a7c1440bde9d91.scope: Deactivated successfully.
Jan 23 18:35:37 compute-0 conmon[90340]: conmon d6d09337befa8897d7bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6d09337befa8897d7bb309e6e1a817e4a0ecb80a99a2a6845a7c1440bde9d91.scope/container/memory.events
Jan 23 18:35:37 compute-0 podman[90324]: 2026-01-23 18:35:37.27578165 +0000 UTC m=+0.159858890 container attach d6d09337befa8897d7bb309e6e1a817e4a0ecb80a99a2a6845a7c1440bde9d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:35:37 compute-0 podman[90324]: 2026-01-23 18:35:37.276137099 +0000 UTC m=+0.160214339 container died d6d09337befa8897d7bb309e6e1a817e4a0ecb80a99a2a6845a7c1440bde9d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 18:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-70fa02d388a4916ad3de6e6ee7325ea2c9cbb2b04eb99107d7f32d6acba52339-merged.mount: Deactivated successfully.
Jan 23 18:35:37 compute-0 podman[90324]: 2026-01-23 18:35:37.361157583 +0000 UTC m=+0.245234823 container remove d6d09337befa8897d7bb309e6e1a817e4a0ecb80a99a2a6845a7c1440bde9d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 18:35:37 compute-0 systemd[1]: libpod-conmon-d6d09337befa8897d7bb309e6e1a817e4a0ecb80a99a2a6845a7c1440bde9d91.scope: Deactivated successfully.
Jan 23 18:35:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 18:35:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1944732176' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:37 compute-0 ceph-osd[88072]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.741 iops: 6845.715 elapsed_sec: 0.438
Jan 23 18:35:37 compute-0 ceph-osd[88072]: log_channel(cluster) log [WRN] : OSD bench result of 6845.715082 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 18:35:37 compute-0 ceph-osd[88072]: osd.2 0 waiting for initial osdmap
Jan 23 18:35:37 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2[88068]: 2026-01-23T18:35:37.443+0000 7f42e3f95640 -1 osd.2 0 waiting for initial osdmap
Jan 23 18:35:37 compute-0 ceph-osd[88072]: osd.2 18 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 23 18:35:37 compute-0 ceph-osd[88072]: osd.2 18 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 23 18:35:37 compute-0 ceph-osd[88072]: osd.2 18 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 23 18:35:37 compute-0 ceph-osd[88072]: osd.2 18 check_osdmap_features require_osd_release unknown -> tentacle
Jan 23 18:35:37 compute-0 ceph-osd[88072]: osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 23 18:35:37 compute-0 ceph-osd[88072]: osd.2 18 set_numa_affinity not setting numa affinity
Jan 23 18:35:37 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-osd-2[88068]: 2026-01-23T18:35:37.470+0000 7f42de588640 -1 osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 23 18:35:37 compute-0 ceph-osd[88072]: osd.2 18 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 23 18:35:37 compute-0 podman[90367]: 2026-01-23 18:35:37.50774463 +0000 UTC m=+0.037757568 container create 38de599a4032808eca4e4f32bfbcfb265f99b7704010ee71a292af5f1da382f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ramanujan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:37 compute-0 systemd[1]: Started libpod-conmon-38de599a4032808eca4e4f32bfbcfb265f99b7704010ee71a292af5f1da382f3.scope.
Jan 23 18:35:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae3033bec2b846575b5fcfbbd785598939befb0c9366b4eeef7d3f143df5021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae3033bec2b846575b5fcfbbd785598939befb0c9366b4eeef7d3f143df5021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae3033bec2b846575b5fcfbbd785598939befb0c9366b4eeef7d3f143df5021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae3033bec2b846575b5fcfbbd785598939befb0c9366b4eeef7d3f143df5021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae3033bec2b846575b5fcfbbd785598939befb0c9366b4eeef7d3f143df5021/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:37 compute-0 podman[90367]: 2026-01-23 18:35:37.489970621 +0000 UTC m=+0.019983569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:37 compute-0 podman[90367]: 2026-01-23 18:35:37.590501173 +0000 UTC m=+0.120514111 container init 38de599a4032808eca4e4f32bfbcfb265f99b7704010ee71a292af5f1da382f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ramanujan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:37 compute-0 podman[90367]: 2026-01-23 18:35:37.597006675 +0000 UTC m=+0.127019613 container start 38de599a4032808eca4e4f32bfbcfb265f99b7704010ee71a292af5f1da382f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ramanujan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 18:35:37 compute-0 podman[90367]: 2026-01-23 18:35:37.600495448 +0000 UTC m=+0.130508386 container attach 38de599a4032808eca4e4f32bfbcfb265f99b7704010ee71a292af5f1da382f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ramanujan, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 23 18:35:37 compute-0 ceph-mon[75097]: Adjusting osd_memory_target on compute-0 to 43687k
Jan 23 18:35:37 compute-0 ceph-mon[75097]: Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:37 compute-0 ceph-mon[75097]: pgmap v42: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:37 compute-0 ceph-mon[75097]: osdmap e18: 3 total, 2 up, 3 in
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:37 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1944732176' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:38 compute-0 recursing_ramanujan[90382]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:35:38 compute-0 recursing_ramanujan[90382]: --> All data devices are unavailable
Jan 23 18:35:38 compute-0 systemd[1]: libpod-38de599a4032808eca4e4f32bfbcfb265f99b7704010ee71a292af5f1da382f3.scope: Deactivated successfully.
Jan 23 18:35:38 compute-0 podman[90367]: 2026-01-23 18:35:38.06602865 +0000 UTC m=+0.596041598 container died 38de599a4032808eca4e4f32bfbcfb265f99b7704010ee71a292af5f1da382f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ramanujan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 23 18:35:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ae3033bec2b846575b5fcfbbd785598939befb0c9366b4eeef7d3f143df5021-merged.mount: Deactivated successfully.
Jan 23 18:35:38 compute-0 ceph-mgr[75390]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3948518255; not ready for session (expect reconnect)
Jan 23 18:35:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:38 compute-0 ceph-mgr[75390]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 18:35:38 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:38 compute-0 podman[90367]: 2026-01-23 18:35:38.147191832 +0000 UTC m=+0.677204750 container remove 38de599a4032808eca4e4f32bfbcfb265f99b7704010ee71a292af5f1da382f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 23 18:35:38 compute-0 systemd[1]: libpod-conmon-38de599a4032808eca4e4f32bfbcfb265f99b7704010ee71a292af5f1da382f3.scope: Deactivated successfully.
Jan 23 18:35:38 compute-0 sudo[90261]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 23 18:35:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1944732176' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 23 18:35:38 compute-0 nifty_black[90288]: pool 'backups' created
Jan 23 18:35:38 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3948518255,v1:192.168.122.100:6811/3948518255] boot
Jan 23 18:35:38 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 23 18:35:38 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:35:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 18:35:38 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:38 compute-0 ceph-osd[88072]: osd.2 19 state: booting -> active
Jan 23 18:35:38 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[15,19)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:35:38 compute-0 systemd[1]: libpod-af5b939e8a382e14efcd949b7485148cd1b5c24d044c860a240399f3d5bbcac1.scope: Deactivated successfully.
Jan 23 18:35:38 compute-0 podman[90223]: 2026-01-23 18:35:38.237923246 +0000 UTC m=+1.503654426 container died af5b939e8a382e14efcd949b7485148cd1b5c24d044c860a240399f3d5bbcac1 (image=quay.io/ceph/ceph:v20, name=nifty_black, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:38 compute-0 sudo[90415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:38 compute-0 sudo[90415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:38 compute-0 sudo[90415]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:38 compute-0 sudo[90447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:35:38 compute-0 sudo[90447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-120aa0463401fd27ff479b1a160da6b513cc353305d8f3c102a289672d28e7a0-merged.mount: Deactivated successfully.
Jan 23 18:35:38 compute-0 podman[90223]: 2026-01-23 18:35:38.523664806 +0000 UTC m=+1.789395986 container remove af5b939e8a382e14efcd949b7485148cd1b5c24d044c860a240399f3d5bbcac1 (image=quay.io/ceph/ceph:v20, name=nifty_black, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:38 compute-0 systemd[1]: libpod-conmon-af5b939e8a382e14efcd949b7485148cd1b5c24d044c860a240399f3d5bbcac1.scope: Deactivated successfully.
Jan 23 18:35:38 compute-0 sudo[90220]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:38 compute-0 sudo[90527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqlwbefhfegbyyybfmwksvsummfzvktr ; /usr/bin/python3'
Jan 23 18:35:38 compute-0 sudo[90527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:38 compute-0 podman[90490]: 2026-01-23 18:35:38.567844861 +0000 UTC m=+0.020371008 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:38 compute-0 python3[90529]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v45: 4 pgs: 3 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 23 18:35:39 compute-0 podman[90490]: 2026-01-23 18:35:39.190366207 +0000 UTC m=+0.642892304 container create 94450dbfb1b14d02fc69813d29d6f40ad99d1195c30540b3657573e673d01d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 23 18:35:39 compute-0 ceph-mon[75097]: OSD bench result of 6845.715082 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 18:35:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:39 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1944732176' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:39 compute-0 ceph-mon[75097]: osd.2 [v2:192.168.122.100:6810/3948518255,v1:192.168.122.100:6811/3948518255] boot
Jan 23 18:35:39 compute-0 ceph-mon[75097]: osdmap e19: 3 total, 3 up, 3 in
Jan 23 18:35:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 23 18:35:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 23 18:35:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 23 18:35:39 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 23 18:35:39 compute-0 systemd[1]: Started libpod-conmon-94450dbfb1b14d02fc69813d29d6f40ad99d1195c30540b3657573e673d01d72.scope.
Jan 23 18:35:39 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:35:39 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[15,19)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:35:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:39 compute-0 podman[90530]: 2026-01-23 18:35:39.269853774 +0000 UTC m=+0.459306330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:39 compute-0 podman[90490]: 2026-01-23 18:35:39.483969175 +0000 UTC m=+0.936495282 container init 94450dbfb1b14d02fc69813d29d6f40ad99d1195c30540b3657573e673d01d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 18:35:39 compute-0 podman[90490]: 2026-01-23 18:35:39.494756108 +0000 UTC m=+0.947282225 container start 94450dbfb1b14d02fc69813d29d6f40ad99d1195c30540b3657573e673d01d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:39 compute-0 adoring_golick[90543]: 167 167
Jan 23 18:35:39 compute-0 systemd[1]: libpod-94450dbfb1b14d02fc69813d29d6f40ad99d1195c30540b3657573e673d01d72.scope: Deactivated successfully.
Jan 23 18:35:39 compute-0 podman[90490]: 2026-01-23 18:35:39.588125282 +0000 UTC m=+1.040651399 container attach 94450dbfb1b14d02fc69813d29d6f40ad99d1195c30540b3657573e673d01d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:39 compute-0 podman[90490]: 2026-01-23 18:35:39.588691297 +0000 UTC m=+1.041217404 container died 94450dbfb1b14d02fc69813d29d6f40ad99d1195c30540b3657573e673d01d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_golick, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:35:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d706f1f9e480cca45c740b7140d7511a6fd51f19dc2c676470e55feabe47267-merged.mount: Deactivated successfully.
Jan 23 18:35:39 compute-0 podman[90530]: 2026-01-23 18:35:39.622879419 +0000 UTC m=+0.812331965 container create 7879e6e9d3d83939134d8dd1f8cc8a2999a6705e44626adea6784e46201c305b (image=quay.io/ceph/ceph:v20, name=unruffled_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:39 compute-0 podman[90490]: 2026-01-23 18:35:39.646359798 +0000 UTC m=+1.098885905 container remove 94450dbfb1b14d02fc69813d29d6f40ad99d1195c30540b3657573e673d01d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 23 18:35:39 compute-0 systemd[1]: libpod-conmon-94450dbfb1b14d02fc69813d29d6f40ad99d1195c30540b3657573e673d01d72.scope: Deactivated successfully.
Jan 23 18:35:39 compute-0 systemd[1]: Started libpod-conmon-7879e6e9d3d83939134d8dd1f8cc8a2999a6705e44626adea6784e46201c305b.scope.
Jan 23 18:35:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a735e7e26231dbebdd7f4183d9e13a6885ac4d59f52ee21b2c88d882cc73426e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a735e7e26231dbebdd7f4183d9e13a6885ac4d59f52ee21b2c88d882cc73426e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 23 18:35:40 compute-0 podman[90530]: 2026-01-23 18:35:40.342740033 +0000 UTC m=+1.532192559 container init 7879e6e9d3d83939134d8dd1f8cc8a2999a6705e44626adea6784e46201c305b (image=quay.io/ceph/ceph:v20, name=unruffled_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:40 compute-0 podman[90530]: 2026-01-23 18:35:40.350638851 +0000 UTC m=+1.540091357 container start 7879e6e9d3d83939134d8dd1f8cc8a2999a6705e44626adea6784e46201c305b (image=quay.io/ceph/ceph:v20, name=unruffled_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 23 18:35:40 compute-0 ceph-mon[75097]: pgmap v45: 4 pgs: 3 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 23 18:35:40 compute-0 ceph-mon[75097]: osdmap e20: 3 total, 3 up, 3 in
Jan 23 18:35:40 compute-0 podman[90530]: 2026-01-23 18:35:40.373194126 +0000 UTC m=+1.562646662 container attach 7879e6e9d3d83939134d8dd1f8cc8a2999a6705e44626adea6784e46201c305b (image=quay.io/ceph/ceph:v20, name=unruffled_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 18:35:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 23 18:35:40 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 23 18:35:40 compute-0 podman[90577]: 2026-01-23 18:35:40.460701125 +0000 UTC m=+0.673664656 container create 5e32f8b1c452f1dea159f887eacdbdcaaa553bb21cba0f6ec78e63031d18e2cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_turing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 23 18:35:40 compute-0 systemd[1]: Started libpod-conmon-5e32f8b1c452f1dea159f887eacdbdcaaa553bb21cba0f6ec78e63031d18e2cf.scope.
Jan 23 18:35:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:40 compute-0 podman[90577]: 2026-01-23 18:35:40.440907203 +0000 UTC m=+0.653870754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693a43b2ab1f6a927ab40d4ba146bd7c9fbc8bb8e59b563801e4d0d7c584f6e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693a43b2ab1f6a927ab40d4ba146bd7c9fbc8bb8e59b563801e4d0d7c584f6e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693a43b2ab1f6a927ab40d4ba146bd7c9fbc8bb8e59b563801e4d0d7c584f6e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693a43b2ab1f6a927ab40d4ba146bd7c9fbc8bb8e59b563801e4d0d7c584f6e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:40 compute-0 podman[90577]: 2026-01-23 18:35:40.551604643 +0000 UTC m=+0.764568194 container init 5e32f8b1c452f1dea159f887eacdbdcaaa553bb21cba0f6ec78e63031d18e2cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_turing, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 23 18:35:40 compute-0 podman[90577]: 2026-01-23 18:35:40.557730516 +0000 UTC m=+0.770694087 container start 5e32f8b1c452f1dea159f887eacdbdcaaa553bb21cba0f6ec78e63031d18e2cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 23 18:35:40 compute-0 podman[90577]: 2026-01-23 18:35:40.56244088 +0000 UTC m=+0.775404441 container attach 5e32f8b1c452f1dea159f887eacdbdcaaa553bb21cba0f6ec78e63031d18e2cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_turing, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 18:35:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2827082891' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:40 compute-0 beautiful_turing[90613]: {
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:     "0": [
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:         {
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "devices": [
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "/dev/loop3"
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             ],
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_name": "ceph_lv0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_size": "21470642176",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "name": "ceph_lv0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "tags": {
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.cluster_name": "ceph",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.crush_device_class": "",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.encrypted": "0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.objectstore": "bluestore",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.osd_id": "0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.type": "block",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.vdo": "0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.with_tpm": "0"
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             },
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "type": "block",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "vg_name": "ceph_vg0"
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:         }
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:     ],
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:     "1": [
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:         {
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "devices": [
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "/dev/loop4"
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             ],
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_name": "ceph_lv1",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_size": "21470642176",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "name": "ceph_lv1",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "tags": {
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.cluster_name": "ceph",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.crush_device_class": "",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.encrypted": "0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.objectstore": "bluestore",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.osd_id": "1",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.type": "block",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.vdo": "0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.with_tpm": "0"
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             },
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "type": "block",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "vg_name": "ceph_vg1"
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:         }
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:     ],
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:     "2": [
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:         {
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "devices": [
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "/dev/loop5"
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             ],
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_name": "ceph_lv2",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_size": "21470642176",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "name": "ceph_lv2",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "tags": {
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.cluster_name": "ceph",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.crush_device_class": "",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.encrypted": "0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.objectstore": "bluestore",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.osd_id": "2",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.type": "block",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.vdo": "0",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:                 "ceph.with_tpm": "0"
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             },
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "type": "block",
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:             "vg_name": "ceph_vg2"
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:         }
Jan 23 18:35:40 compute-0 beautiful_turing[90613]:     ]
Jan 23 18:35:40 compute-0 beautiful_turing[90613]: }
Jan 23 18:35:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v48: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:40 compute-0 systemd[1]: libpod-5e32f8b1c452f1dea159f887eacdbdcaaa553bb21cba0f6ec78e63031d18e2cf.scope: Deactivated successfully.
Jan 23 18:35:40 compute-0 podman[90577]: 2026-01-23 18:35:40.867830227 +0000 UTC m=+1.080793758 container died 5e32f8b1c452f1dea159f887eacdbdcaaa553bb21cba0f6ec78e63031d18e2cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_turing, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 18:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-693a43b2ab1f6a927ab40d4ba146bd7c9fbc8bb8e59b563801e4d0d7c584f6e7-merged.mount: Deactivated successfully.
Jan 23 18:35:40 compute-0 podman[90577]: 2026-01-23 18:35:40.924590885 +0000 UTC m=+1.137554416 container remove 5e32f8b1c452f1dea159f887eacdbdcaaa553bb21cba0f6ec78e63031d18e2cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_turing, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:40 compute-0 systemd[1]: libpod-conmon-5e32f8b1c452f1dea159f887eacdbdcaaa553bb21cba0f6ec78e63031d18e2cf.scope: Deactivated successfully.
Jan 23 18:35:40 compute-0 sudo[90447]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:41 compute-0 sudo[90637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:41 compute-0 sudo[90637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:41 compute-0 sudo[90637]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:41 compute-0 sudo[90662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:35:41 compute-0 sudo[90662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:41 compute-0 podman[90700]: 2026-01-23 18:35:41.364139903 +0000 UTC m=+0.039676198 container create 01ba9d0a767286d4710d64afc75e8c0128703e0a4357873507d1e17cd8249f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 18:35:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 23 18:35:41 compute-0 systemd[1]: Started libpod-conmon-01ba9d0a767286d4710d64afc75e8c0128703e0a4357873507d1e17cd8249f3f.scope.
Jan 23 18:35:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2827082891' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 23 18:35:41 compute-0 ceph-mon[75097]: osdmap e21: 3 total, 3 up, 3 in
Jan 23 18:35:41 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2827082891' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:41 compute-0 ceph-mon[75097]: pgmap v48: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:41 compute-0 unruffled_newton[90569]: pool 'images' created
Jan 23 18:35:41 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 23 18:35:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:41 compute-0 systemd[1]: libpod-7879e6e9d3d83939134d8dd1f8cc8a2999a6705e44626adea6784e46201c305b.scope: Deactivated successfully.
Jan 23 18:35:41 compute-0 podman[90530]: 2026-01-23 18:35:41.422467822 +0000 UTC m=+2.611920338 container died 7879e6e9d3d83939134d8dd1f8cc8a2999a6705e44626adea6784e46201c305b (image=quay.io/ceph/ceph:v20, name=unruffled_newton, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 23 18:35:41 compute-0 podman[90700]: 2026-01-23 18:35:41.439319296 +0000 UTC m=+0.114855621 container init 01ba9d0a767286d4710d64afc75e8c0128703e0a4357873507d1e17cd8249f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:35:41 compute-0 podman[90700]: 2026-01-23 18:35:41.343436607 +0000 UTC m=+0.018972922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:41 compute-0 podman[90700]: 2026-01-23 18:35:41.447097852 +0000 UTC m=+0.122634147 container start 01ba9d0a767286d4710d64afc75e8c0128703e0a4357873507d1e17cd8249f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_moser, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 23 18:35:41 compute-0 podman[90700]: 2026-01-23 18:35:41.451755415 +0000 UTC m=+0.127291810 container attach 01ba9d0a767286d4710d64afc75e8c0128703e0a4357873507d1e17cd8249f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 23 18:35:41 compute-0 gifted_moser[90716]: 167 167
Jan 23 18:35:41 compute-0 systemd[1]: libpod-01ba9d0a767286d4710d64afc75e8c0128703e0a4357873507d1e17cd8249f3f.scope: Deactivated successfully.
Jan 23 18:35:41 compute-0 podman[90700]: 2026-01-23 18:35:41.456627894 +0000 UTC m=+0.132164209 container died 01ba9d0a767286d4710d64afc75e8c0128703e0a4357873507d1e17cd8249f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_moser, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 23 18:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a735e7e26231dbebdd7f4183d9e13a6885ac4d59f52ee21b2c88d882cc73426e-merged.mount: Deactivated successfully.
Jan 23 18:35:41 compute-0 podman[90530]: 2026-01-23 18:35:41.478721446 +0000 UTC m=+2.668173952 container remove 7879e6e9d3d83939134d8dd1f8cc8a2999a6705e44626adea6784e46201c305b (image=quay.io/ceph/ceph:v20, name=unruffled_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:41 compute-0 sudo[90527]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2188495aa33bf95f361210ce410dd22a6ddf8da51339a6fda386f39272f9ace-merged.mount: Deactivated successfully.
Jan 23 18:35:41 compute-0 systemd[1]: libpod-conmon-7879e6e9d3d83939134d8dd1f8cc8a2999a6705e44626adea6784e46201c305b.scope: Deactivated successfully.
Jan 23 18:35:41 compute-0 podman[90700]: 2026-01-23 18:35:41.518428414 +0000 UTC m=+0.193964709 container remove 01ba9d0a767286d4710d64afc75e8c0128703e0a4357873507d1e17cd8249f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_moser, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:41 compute-0 systemd[1]: libpod-conmon-01ba9d0a767286d4710d64afc75e8c0128703e0a4357873507d1e17cd8249f3f.scope: Deactivated successfully.
Jan 23 18:35:41 compute-0 sudo[90771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akufxpszjbazoraoevsrwokecdowsxoj ; /usr/bin/python3'
Jan 23 18:35:41 compute-0 sudo[90771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:41 compute-0 podman[90776]: 2026-01-23 18:35:41.666602074 +0000 UTC m=+0.047050433 container create 2e736a1082ef1fa1e03277ec302413e7fbf7d50b11dac51240dd07330c86b548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wescoff, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:41 compute-0 systemd[1]: Started libpod-conmon-2e736a1082ef1fa1e03277ec302413e7fbf7d50b11dac51240dd07330c86b548.scope.
Jan 23 18:35:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:35:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5836e419eb6829003e1106ab562ae7d778460dd22b9e6255fc3ad441033ade99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5836e419eb6829003e1106ab562ae7d778460dd22b9e6255fc3ad441033ade99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5836e419eb6829003e1106ab562ae7d778460dd22b9e6255fc3ad441033ade99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5836e419eb6829003e1106ab562ae7d778460dd22b9e6255fc3ad441033ade99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:41 compute-0 podman[90776]: 2026-01-23 18:35:41.640807103 +0000 UTC m=+0.021255462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:41 compute-0 python3[90779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:41 compute-0 podman[90776]: 2026-01-23 18:35:41.756693271 +0000 UTC m=+0.137141630 container init 2e736a1082ef1fa1e03277ec302413e7fbf7d50b11dac51240dd07330c86b548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:41 compute-0 podman[90776]: 2026-01-23 18:35:41.763427879 +0000 UTC m=+0.143876218 container start 2e736a1082ef1fa1e03277ec302413e7fbf7d50b11dac51240dd07330c86b548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:35:41 compute-0 podman[90776]: 2026-01-23 18:35:41.767767473 +0000 UTC m=+0.148215812 container attach 2e736a1082ef1fa1e03277ec302413e7fbf7d50b11dac51240dd07330c86b548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:35:41 compute-0 podman[90798]: 2026-01-23 18:35:41.80517607 +0000 UTC m=+0.039691789 container create 889630aa82f09fd526d90d5246bf43f948e9a8b57235cf54b24fd15aee9e4ffa (image=quay.io/ceph/ceph:v20, name=determined_brahmagupta, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 18:35:41 compute-0 systemd[1]: Started libpod-conmon-889630aa82f09fd526d90d5246bf43f948e9a8b57235cf54b24fd15aee9e4ffa.scope.
Jan 23 18:35:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6b4246856f3a85e9ae8a1032b164ecd23024f81752c498d9a05aa61440d44c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6b4246856f3a85e9ae8a1032b164ecd23024f81752c498d9a05aa61440d44c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:41 compute-0 podman[90798]: 2026-01-23 18:35:41.788352956 +0000 UTC m=+0.022868695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:41 compute-0 podman[90798]: 2026-01-23 18:35:41.904362517 +0000 UTC m=+0.138878256 container init 889630aa82f09fd526d90d5246bf43f948e9a8b57235cf54b24fd15aee9e4ffa (image=quay.io/ceph/ceph:v20, name=determined_brahmagupta, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 18:35:41 compute-0 podman[90798]: 2026-01-23 18:35:41.917682848 +0000 UTC m=+0.152198577 container start 889630aa82f09fd526d90d5246bf43f948e9a8b57235cf54b24fd15aee9e4ffa (image=quay.io/ceph/ceph:v20, name=determined_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:41 compute-0 podman[90798]: 2026-01-23 18:35:41.921933171 +0000 UTC m=+0.156448920 container attach 889630aa82f09fd526d90d5246bf43f948e9a8b57235cf54b24fd15aee9e4ffa (image=quay.io/ceph/ceph:v20, name=determined_brahmagupta, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 18:35:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3644556631' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 23 18:35:42 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2827082891' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:42 compute-0 ceph-mon[75097]: osdmap e22: 3 total, 3 up, 3 in
Jan 23 18:35:42 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3644556631' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3644556631' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 23 18:35:42 compute-0 lvm[90915]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:35:42 compute-0 lvm[90915]: VG ceph_vg0 finished
Jan 23 18:35:42 compute-0 determined_brahmagupta[90816]: pool 'cephfs.cephfs.meta' created
Jan 23 18:35:42 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 23 18:35:42 compute-0 lvm[90916]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:35:42 compute-0 lvm[90916]: VG ceph_vg1 finished
Jan 23 18:35:42 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:35:42 compute-0 systemd[1]: libpod-889630aa82f09fd526d90d5246bf43f948e9a8b57235cf54b24fd15aee9e4ffa.scope: Deactivated successfully.
Jan 23 18:35:42 compute-0 lvm[90919]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:35:42 compute-0 lvm[90919]: VG ceph_vg2 finished
Jan 23 18:35:42 compute-0 podman[90798]: 2026-01-23 18:35:42.436744945 +0000 UTC m=+0.671260674 container died 889630aa82f09fd526d90d5246bf43f948e9a8b57235cf54b24fd15aee9e4ffa (image=quay.io/ceph/ceph:v20, name=determined_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae6b4246856f3a85e9ae8a1032b164ecd23024f81752c498d9a05aa61440d44c-merged.mount: Deactivated successfully.
Jan 23 18:35:42 compute-0 podman[90798]: 2026-01-23 18:35:42.47829491 +0000 UTC m=+0.712810629 container remove 889630aa82f09fd526d90d5246bf43f948e9a8b57235cf54b24fd15aee9e4ffa (image=quay.io/ceph/ceph:v20, name=determined_brahmagupta, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 18:35:42 compute-0 systemd[1]: libpod-conmon-889630aa82f09fd526d90d5246bf43f948e9a8b57235cf54b24fd15aee9e4ffa.scope: Deactivated successfully.
Jan 23 18:35:42 compute-0 sudo[90771]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:42 compute-0 flamboyant_wescoff[90795]: {}
Jan 23 18:35:42 compute-0 systemd[1]: libpod-2e736a1082ef1fa1e03277ec302413e7fbf7d50b11dac51240dd07330c86b548.scope: Deactivated successfully.
Jan 23 18:35:42 compute-0 podman[90776]: 2026-01-23 18:35:42.531893825 +0000 UTC m=+0.912342164 container died 2e736a1082ef1fa1e03277ec302413e7fbf7d50b11dac51240dd07330c86b548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wescoff, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:42 compute-0 systemd[1]: libpod-2e736a1082ef1fa1e03277ec302413e7fbf7d50b11dac51240dd07330c86b548.scope: Consumed 1.255s CPU time.
Jan 23 18:35:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5836e419eb6829003e1106ab562ae7d778460dd22b9e6255fc3ad441033ade99-merged.mount: Deactivated successfully.
Jan 23 18:35:42 compute-0 podman[90776]: 2026-01-23 18:35:42.583850066 +0000 UTC m=+0.964298415 container remove 2e736a1082ef1fa1e03277ec302413e7fbf7d50b11dac51240dd07330c86b548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 18:35:42 compute-0 systemd[1]: libpod-conmon-2e736a1082ef1fa1e03277ec302413e7fbf7d50b11dac51240dd07330c86b548.scope: Deactivated successfully.
Jan 23 18:35:42 compute-0 sudo[90662]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:35:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:35:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:42 compute-0 sudo[90967]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwewceplbtjofatneatojdhmudzvwari ; /usr/bin/python3'
Jan 23 18:35:42 compute-0 sudo[90967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:42 compute-0 sudo[90969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:35:42 compute-0 sudo[90969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:42 compute-0 sudo[90969]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:42 compute-0 python3[90970]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:42 compute-0 podman[90995]: 2026-01-23 18:35:42.854989549 +0000 UTC m=+0.062288844 container create 87d276af88a46f5be5cc8b1433783b64591a96a4a1e03b8c1270b231f561ad74 (image=quay.io/ceph/ceph:v20, name=charming_buck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 18:35:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v51: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:42 compute-0 systemd[1]: Started libpod-conmon-87d276af88a46f5be5cc8b1433783b64591a96a4a1e03b8c1270b231f561ad74.scope.
Jan 23 18:35:42 compute-0 podman[90995]: 2026-01-23 18:35:42.832859986 +0000 UTC m=+0.040159271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f33f92f5120310394ad00cf5062a3810418de4ec278a33f7e87df41409542f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f33f92f5120310394ad00cf5062a3810418de4ec278a33f7e87df41409542f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:42 compute-0 podman[90995]: 2026-01-23 18:35:42.947469849 +0000 UTC m=+0.154769124 container init 87d276af88a46f5be5cc8b1433783b64591a96a4a1e03b8c1270b231f561ad74 (image=quay.io/ceph/ceph:v20, name=charming_buck, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 18:35:42 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:35:42 compute-0 podman[90995]: 2026-01-23 18:35:42.956095818 +0000 UTC m=+0.163395083 container start 87d276af88a46f5be5cc8b1433783b64591a96a4a1e03b8c1270b231f561ad74 (image=quay.io/ceph/ceph:v20, name=charming_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:42 compute-0 podman[90995]: 2026-01-23 18:35:42.960244337 +0000 UTC m=+0.167543682 container attach 87d276af88a46f5be5cc8b1433783b64591a96a4a1e03b8c1270b231f561ad74 (image=quay.io/ceph/ceph:v20, name=charming_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 23 18:35:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 18:35:43 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/614599823' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 23 18:35:43 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/614599823' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 23 18:35:43 compute-0 charming_buck[91011]: pool 'cephfs.cephfs.data' created
Jan 23 18:35:43 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 23 18:35:43 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:35:43 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3644556631' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:43 compute-0 ceph-mon[75097]: osdmap e23: 3 total, 3 up, 3 in
Jan 23 18:35:43 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:43 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:43 compute-0 ceph-mon[75097]: pgmap v51: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:43 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/614599823' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 23 18:35:43 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:35:43 compute-0 systemd[1]: libpod-87d276af88a46f5be5cc8b1433783b64591a96a4a1e03b8c1270b231f561ad74.scope: Deactivated successfully.
Jan 23 18:35:43 compute-0 podman[90995]: 2026-01-23 18:35:43.442243475 +0000 UTC m=+0.649542820 container died 87d276af88a46f5be5cc8b1433783b64591a96a4a1e03b8c1270b231f561ad74 (image=quay.io/ceph/ceph:v20, name=charming_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-58f33f92f5120310394ad00cf5062a3810418de4ec278a33f7e87df41409542f-merged.mount: Deactivated successfully.
Jan 23 18:35:43 compute-0 podman[90995]: 2026-01-23 18:35:43.483737309 +0000 UTC m=+0.691036564 container remove 87d276af88a46f5be5cc8b1433783b64591a96a4a1e03b8c1270b231f561ad74 (image=quay.io/ceph/ceph:v20, name=charming_buck, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:43 compute-0 systemd[1]: libpod-conmon-87d276af88a46f5be5cc8b1433783b64591a96a4a1e03b8c1270b231f561ad74.scope: Deactivated successfully.
Jan 23 18:35:43 compute-0 sudo[90967]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:43 compute-0 sudo[91075]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhxnewpypohfarvdfqjwtlrokqdcgwcu ; /usr/bin/python3'
Jan 23 18:35:43 compute-0 sudo[91075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:43 compute-0 python3[91077]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:43 compute-0 podman[91078]: 2026-01-23 18:35:43.891652063 +0000 UTC m=+0.055918706 container create 559c9c8f39ca1c7983851ae53bd69ffa25be61b25bca347bfbbb59a58055fc81 (image=quay.io/ceph/ceph:v20, name=focused_lovelace, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 23 18:35:43 compute-0 systemd[1]: Started libpod-conmon-559c9c8f39ca1c7983851ae53bd69ffa25be61b25bca347bfbbb59a58055fc81.scope.
Jan 23 18:35:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc7a8476f8f231d931bf89d2b2b74c3be33829097b6fb223ec40dd581497c843/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc7a8476f8f231d931bf89d2b2b74c3be33829097b6fb223ec40dd581497c843/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:43 compute-0 podman[91078]: 2026-01-23 18:35:43.875437475 +0000 UTC m=+0.039704108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:43 compute-0 podman[91078]: 2026-01-23 18:35:43.972934868 +0000 UTC m=+0.137201511 container init 559c9c8f39ca1c7983851ae53bd69ffa25be61b25bca347bfbbb59a58055fc81 (image=quay.io/ceph/ceph:v20, name=focused_lovelace, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:43 compute-0 podman[91078]: 2026-01-23 18:35:43.979944733 +0000 UTC m=+0.144211396 container start 559c9c8f39ca1c7983851ae53bd69ffa25be61b25bca347bfbbb59a58055fc81 (image=quay.io/ceph/ceph:v20, name=focused_lovelace, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 18:35:43 compute-0 podman[91078]: 2026-01-23 18:35:43.984699248 +0000 UTC m=+0.148965921 container attach 559c9c8f39ca1c7983851ae53bd69ffa25be61b25bca347bfbbb59a58055fc81 (image=quay.io/ceph/ceph:v20, name=focused_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 18:35:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 23 18:35:44 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2094401545' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 23 18:35:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 23 18:35:44 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2094401545' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 23 18:35:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 23 18:35:44 compute-0 focused_lovelace[91093]: enabled application 'rbd' on pool 'vms'
Jan 23 18:35:44 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 23 18:35:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:35:44 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/614599823' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 18:35:44 compute-0 ceph-mon[75097]: osdmap e24: 3 total, 3 up, 3 in
Jan 23 18:35:44 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2094401545' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 23 18:35:44 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2094401545' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 23 18:35:44 compute-0 ceph-mon[75097]: osdmap e25: 3 total, 3 up, 3 in
Jan 23 18:35:44 compute-0 systemd[1]: libpod-559c9c8f39ca1c7983851ae53bd69ffa25be61b25bca347bfbbb59a58055fc81.scope: Deactivated successfully.
Jan 23 18:35:44 compute-0 podman[91078]: 2026-01-23 18:35:44.446507623 +0000 UTC m=+0.610774246 container died 559c9c8f39ca1c7983851ae53bd69ffa25be61b25bca347bfbbb59a58055fc81 (image=quay.io/ceph/ceph:v20, name=focused_lovelace, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:35:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc7a8476f8f231d931bf89d2b2b74c3be33829097b6fb223ec40dd581497c843-merged.mount: Deactivated successfully.
Jan 23 18:35:44 compute-0 podman[91078]: 2026-01-23 18:35:44.495795793 +0000 UTC m=+0.660062416 container remove 559c9c8f39ca1c7983851ae53bd69ffa25be61b25bca347bfbbb59a58055fc81 (image=quay.io/ceph/ceph:v20, name=focused_lovelace, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:35:44 compute-0 systemd[1]: libpod-conmon-559c9c8f39ca1c7983851ae53bd69ffa25be61b25bca347bfbbb59a58055fc81.scope: Deactivated successfully.
Jan 23 18:35:44 compute-0 sudo[91075]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:44 compute-0 sudo[91154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rziqrztrspumopjjtcgzvnzeztdawnvp ; /usr/bin/python3'
Jan 23 18:35:44 compute-0 sudo[91154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:44 compute-0 python3[91156]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:44 compute-0 podman[91157]: 2026-01-23 18:35:44.902629778 +0000 UTC m=+0.048378198 container create 06fa5f2186d6f3a2ed843f730cb3f7f4cac1c0841e3a69b24c269c0192ffe7ba (image=quay.io/ceph/ceph:v20, name=fervent_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:44 compute-0 systemd[1]: Started libpod-conmon-06fa5f2186d6f3a2ed843f730cb3f7f4cac1c0841e3a69b24c269c0192ffe7ba.scope.
Jan 23 18:35:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c042a34a2ae32bc51baf52bee06528eebc7929ec4e4e9f2bf07d2128a8d91e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c042a34a2ae32bc51baf52bee06528eebc7929ec4e4e9f2bf07d2128a8d91e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:44 compute-0 podman[91157]: 2026-01-23 18:35:44.960897785 +0000 UTC m=+0.106646195 container init 06fa5f2186d6f3a2ed843f730cb3f7f4cac1c0841e3a69b24c269c0192ffe7ba (image=quay.io/ceph/ceph:v20, name=fervent_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:35:44 compute-0 podman[91157]: 2026-01-23 18:35:44.971572477 +0000 UTC m=+0.117320867 container start 06fa5f2186d6f3a2ed843f730cb3f7f4cac1c0841e3a69b24c269c0192ffe7ba (image=quay.io/ceph/ceph:v20, name=fervent_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 18:35:44 compute-0 podman[91157]: 2026-01-23 18:35:44.882800845 +0000 UTC m=+0.028549255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:44 compute-0 podman[91157]: 2026-01-23 18:35:44.97661251 +0000 UTC m=+0.122360900 container attach 06fa5f2186d6f3a2ed843f730cb3f7f4cac1c0841e3a69b24c269c0192ffe7ba (image=quay.io/ceph/ceph:v20, name=fervent_kilby, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 18:35:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 23 18:35:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1860544413' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 23 18:35:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 23 18:35:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1860544413' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 23 18:35:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 23 18:35:45 compute-0 fervent_kilby[91171]: enabled application 'rbd' on pool 'volumes'
Jan 23 18:35:45 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 23 18:35:45 compute-0 ceph-mon[75097]: pgmap v54: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:45 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1860544413' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 23 18:35:45 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1860544413' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 23 18:35:45 compute-0 ceph-mon[75097]: osdmap e26: 3 total, 3 up, 3 in
Jan 23 18:35:45 compute-0 systemd[1]: libpod-06fa5f2186d6f3a2ed843f730cb3f7f4cac1c0841e3a69b24c269c0192ffe7ba.scope: Deactivated successfully.
Jan 23 18:35:45 compute-0 podman[91157]: 2026-01-23 18:35:45.457444847 +0000 UTC m=+0.603193267 container died 06fa5f2186d6f3a2ed843f730cb3f7f4cac1c0841e3a69b24c269c0192ffe7ba (image=quay.io/ceph/ceph:v20, name=fervent_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-71c042a34a2ae32bc51baf52bee06528eebc7929ec4e4e9f2bf07d2128a8d91e-merged.mount: Deactivated successfully.
Jan 23 18:35:45 compute-0 podman[91157]: 2026-01-23 18:35:45.510758554 +0000 UTC m=+0.656506944 container remove 06fa5f2186d6f3a2ed843f730cb3f7f4cac1c0841e3a69b24c269c0192ffe7ba (image=quay.io/ceph/ceph:v20, name=fervent_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 18:35:45 compute-0 systemd[1]: libpod-conmon-06fa5f2186d6f3a2ed843f730cb3f7f4cac1c0841e3a69b24c269c0192ffe7ba.scope: Deactivated successfully.
Jan 23 18:35:45 compute-0 sudo[91154]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:45 compute-0 sudo[91231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vechukldvtyewiperzyjozmxjqhpaddl ; /usr/bin/python3'
Jan 23 18:35:45 compute-0 sudo[91231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:45 compute-0 python3[91233]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:45 compute-0 podman[91234]: 2026-01-23 18:35:45.82974205 +0000 UTC m=+0.044198727 container create 54a67e57a6b841da49a5ff999848c6f827463a5f4ef4e465058d846eef404d9c (image=quay.io/ceph/ceph:v20, name=zen_kapitsa, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 18:35:45 compute-0 systemd[1]: Started libpod-conmon-54a67e57a6b841da49a5ff999848c6f827463a5f4ef4e465058d846eef404d9c.scope.
Jan 23 18:35:45 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cfa96fd65f8da575faa29f83b795204a9d393fd46b545bdba1e77a46325c30/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cfa96fd65f8da575faa29f83b795204a9d393fd46b545bdba1e77a46325c30/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:45 compute-0 podman[91234]: 2026-01-23 18:35:45.80775894 +0000 UTC m=+0.022215667 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:45 compute-0 podman[91234]: 2026-01-23 18:35:45.909283569 +0000 UTC m=+0.123740356 container init 54a67e57a6b841da49a5ff999848c6f827463a5f4ef4e465058d846eef404d9c (image=quay.io/ceph/ceph:v20, name=zen_kapitsa, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:45 compute-0 podman[91234]: 2026-01-23 18:35:45.917156636 +0000 UTC m=+0.131613313 container start 54a67e57a6b841da49a5ff999848c6f827463a5f4ef4e465058d846eef404d9c (image=quay.io/ceph/ceph:v20, name=zen_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 23 18:35:45 compute-0 podman[91234]: 2026-01-23 18:35:45.920593038 +0000 UTC m=+0.135049715 container attach 54a67e57a6b841da49a5ff999848c6f827463a5f4ef4e465058d846eef404d9c (image=quay.io/ceph/ceph:v20, name=zen_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 23 18:35:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2997951415' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 23 18:35:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 23 18:35:46 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2997951415' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 23 18:35:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2997951415' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 23 18:35:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 23 18:35:46 compute-0 zen_kapitsa[91250]: enabled application 'rbd' on pool 'backups'
Jan 23 18:35:46 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 23 18:35:46 compute-0 systemd[1]: libpod-54a67e57a6b841da49a5ff999848c6f827463a5f4ef4e465058d846eef404d9c.scope: Deactivated successfully.
Jan 23 18:35:46 compute-0 podman[91234]: 2026-01-23 18:35:46.497217032 +0000 UTC m=+0.711673749 container died 54a67e57a6b841da49a5ff999848c6f827463a5f4ef4e465058d846eef404d9c (image=quay.io/ceph/ceph:v20, name=zen_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:35:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-78cfa96fd65f8da575faa29f83b795204a9d393fd46b545bdba1e77a46325c30-merged.mount: Deactivated successfully.
Jan 23 18:35:46 compute-0 podman[91234]: 2026-01-23 18:35:46.552273795 +0000 UTC m=+0.766730512 container remove 54a67e57a6b841da49a5ff999848c6f827463a5f4ef4e465058d846eef404d9c (image=quay.io/ceph/ceph:v20, name=zen_kapitsa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:35:46 compute-0 sudo[91231]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:46 compute-0 systemd[1]: libpod-conmon-54a67e57a6b841da49a5ff999848c6f827463a5f4ef4e465058d846eef404d9c.scope: Deactivated successfully.
Jan 23 18:35:46 compute-0 sudo[91309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojlvfspjkaxcunyxlfqbususodldhbck ; /usr/bin/python3'
Jan 23 18:35:46 compute-0 sudo[91309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:46 compute-0 python3[91311]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:46 compute-0 podman[91312]: 2026-01-23 18:35:46.908028541 +0000 UTC m=+0.052252409 container create d7fa7d63858cb3ec1e56838342a31022afdfab713ef0825ae9b29edc6b93286f (image=quay.io/ceph/ceph:v20, name=laughing_swirles, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 18:35:46 compute-0 systemd[1]: Started libpod-conmon-d7fa7d63858cb3ec1e56838342a31022afdfab713ef0825ae9b29edc6b93286f.scope.
Jan 23 18:35:46 compute-0 podman[91312]: 2026-01-23 18:35:46.883771842 +0000 UTC m=+0.027995710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74388b68bb9f1bbcdb005d5afd19b68ca1757e1cd100b231a132f3fb16823e9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74388b68bb9f1bbcdb005d5afd19b68ca1757e1cd100b231a132f3fb16823e9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:47 compute-0 podman[91312]: 2026-01-23 18:35:47.006594353 +0000 UTC m=+0.150818201 container init d7fa7d63858cb3ec1e56838342a31022afdfab713ef0825ae9b29edc6b93286f (image=quay.io/ceph/ceph:v20, name=laughing_swirles, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:47 compute-0 podman[91312]: 2026-01-23 18:35:47.015682922 +0000 UTC m=+0.159906770 container start d7fa7d63858cb3ec1e56838342a31022afdfab713ef0825ae9b29edc6b93286f (image=quay.io/ceph/ceph:v20, name=laughing_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 18:35:47 compute-0 podman[91312]: 2026-01-23 18:35:47.020117029 +0000 UTC m=+0.164340877 container attach d7fa7d63858cb3ec1e56838342a31022afdfab713ef0825ae9b29edc6b93286f (image=quay.io/ceph/ceph:v20, name=laughing_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 23 18:35:47 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2154201928' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 23 18:35:47 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2997951415' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 23 18:35:47 compute-0 ceph-mon[75097]: osdmap e27: 3 total, 3 up, 3 in
Jan 23 18:35:47 compute-0 ceph-mon[75097]: pgmap v57: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 23 18:35:47 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2154201928' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 23 18:35:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 23 18:35:47 compute-0 laughing_swirles[91327]: enabled application 'rbd' on pool 'images'
Jan 23 18:35:47 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 23 18:35:47 compute-0 systemd[1]: libpod-d7fa7d63858cb3ec1e56838342a31022afdfab713ef0825ae9b29edc6b93286f.scope: Deactivated successfully.
Jan 23 18:35:47 compute-0 podman[91312]: 2026-01-23 18:35:47.57007455 +0000 UTC m=+0.714298418 container died d7fa7d63858cb3ec1e56838342a31022afdfab713ef0825ae9b29edc6b93286f (image=quay.io/ceph/ceph:v20, name=laughing_swirles, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-74388b68bb9f1bbcdb005d5afd19b68ca1757e1cd100b231a132f3fb16823e9e-merged.mount: Deactivated successfully.
Jan 23 18:35:47 compute-0 podman[91312]: 2026-01-23 18:35:47.611473273 +0000 UTC m=+0.755697111 container remove d7fa7d63858cb3ec1e56838342a31022afdfab713ef0825ae9b29edc6b93286f (image=quay.io/ceph/ceph:v20, name=laughing_swirles, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 18:35:47 compute-0 systemd[1]: libpod-conmon-d7fa7d63858cb3ec1e56838342a31022afdfab713ef0825ae9b29edc6b93286f.scope: Deactivated successfully.
Jan 23 18:35:47 compute-0 sudo[91309]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:47 compute-0 sudo[91385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blrrgrwlszmcukwhboadbqngmjbgnvxl ; /usr/bin/python3'
Jan 23 18:35:47 compute-0 sudo[91385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:48 compute-0 python3[91387]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:48 compute-0 podman[91388]: 2026-01-23 18:35:48.050930697 +0000 UTC m=+0.032572570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:49 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:49 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2154201928' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 23 18:35:49 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2154201928' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 23 18:35:49 compute-0 ceph-mon[75097]: osdmap e28: 3 total, 3 up, 3 in
Jan 23 18:35:49 compute-0 podman[91388]: 2026-01-23 18:35:49.85083772 +0000 UTC m=+1.832479543 container create ec06e02c82d1d1d9d667ec425452a8b6118bd956a7a89a5336088b9c489b538b (image=quay.io/ceph/ceph:v20, name=cranky_allen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 18:35:49 compute-0 systemd[1]: Started libpod-conmon-ec06e02c82d1d1d9d667ec425452a8b6118bd956a7a89a5336088b9c489b538b.scope.
Jan 23 18:35:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70824043f402ef181547b277d423b77f9f1fd7b326585dffcd8cf37c8727fde1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70824043f402ef181547b277d423b77f9f1fd7b326585dffcd8cf37c8727fde1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:49 compute-0 podman[91388]: 2026-01-23 18:35:49.932807002 +0000 UTC m=+1.914448885 container init ec06e02c82d1d1d9d667ec425452a8b6118bd956a7a89a5336088b9c489b538b (image=quay.io/ceph/ceph:v20, name=cranky_allen, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:49 compute-0 podman[91388]: 2026-01-23 18:35:49.93917108 +0000 UTC m=+1.920812913 container start ec06e02c82d1d1d9d667ec425452a8b6118bd956a7a89a5336088b9c489b538b (image=quay.io/ceph/ceph:v20, name=cranky_allen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 18:35:49 compute-0 podman[91388]: 2026-01-23 18:35:49.942473487 +0000 UTC m=+1.924115320 container attach ec06e02c82d1d1d9d667ec425452a8b6118bd956a7a89a5336088b9c489b538b (image=quay.io/ceph/ceph:v20, name=cranky_allen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:35:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 23 18:35:50 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2274263700' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 23 18:35:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 23 18:35:50 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2274263700' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 23 18:35:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 23 18:35:50 compute-0 cranky_allen[91403]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 23 18:35:50 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 23 18:35:50 compute-0 ceph-mon[75097]: pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:50 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2274263700' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 23 18:35:50 compute-0 systemd[1]: libpod-ec06e02c82d1d1d9d667ec425452a8b6118bd956a7a89a5336088b9c489b538b.scope: Deactivated successfully.
Jan 23 18:35:50 compute-0 podman[91388]: 2026-01-23 18:35:50.948005099 +0000 UTC m=+2.929646932 container died ec06e02c82d1d1d9d667ec425452a8b6118bd956a7a89a5336088b9c489b538b (image=quay.io/ceph/ceph:v20, name=cranky_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-70824043f402ef181547b277d423b77f9f1fd7b326585dffcd8cf37c8727fde1-merged.mount: Deactivated successfully.
Jan 23 18:35:50 compute-0 podman[91388]: 2026-01-23 18:35:50.994230659 +0000 UTC m=+2.975872502 container remove ec06e02c82d1d1d9d667ec425452a8b6118bd956a7a89a5336088b9c489b538b (image=quay.io/ceph/ceph:v20, name=cranky_allen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:51 compute-0 sudo[91385]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:51 compute-0 systemd[1]: libpod-conmon-ec06e02c82d1d1d9d667ec425452a8b6118bd956a7a89a5336088b9c489b538b.scope: Deactivated successfully.
Jan 23 18:35:51 compute-0 sudo[91462]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjanylfjmeuqlmklwlgijzbujzyozpdx ; /usr/bin/python3'
Jan 23 18:35:51 compute-0 sudo[91462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:51 compute-0 python3[91464]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:51 compute-0 podman[91465]: 2026-01-23 18:35:51.350037587 +0000 UTC m=+0.067815521 container create 71f644d325fa040fc1ac279bab40051cb135a5c6cc008320779e1dad22dea326 (image=quay.io/ceph/ceph:v20, name=sleepy_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:35:51 compute-0 systemd[1]: Started libpod-conmon-71f644d325fa040fc1ac279bab40051cb135a5c6cc008320779e1dad22dea326.scope.
Jan 23 18:35:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8657432ca80f1bf51e1a31b4599cd22fb3a0d8d597200d7073d962fe9748554e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8657432ca80f1bf51e1a31b4599cd22fb3a0d8d597200d7073d962fe9748554e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:51 compute-0 podman[91465]: 2026-01-23 18:35:51.326086135 +0000 UTC m=+0.043864059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:51 compute-0 podman[91465]: 2026-01-23 18:35:51.427763157 +0000 UTC m=+0.145541061 container init 71f644d325fa040fc1ac279bab40051cb135a5c6cc008320779e1dad22dea326 (image=quay.io/ceph/ceph:v20, name=sleepy_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 23 18:35:51 compute-0 podman[91465]: 2026-01-23 18:35:51.433745315 +0000 UTC m=+0.151523199 container start 71f644d325fa040fc1ac279bab40051cb135a5c6cc008320779e1dad22dea326 (image=quay.io/ceph/ceph:v20, name=sleepy_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:51 compute-0 podman[91465]: 2026-01-23 18:35:51.437536755 +0000 UTC m=+0.155314679 container attach 71f644d325fa040fc1ac279bab40051cb135a5c6cc008320779e1dad22dea326 (image=quay.io/ceph/ceph:v20, name=sleepy_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:51 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 23 18:35:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4265751146' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 23 18:35:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 23 18:35:51 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2274263700' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 23 18:35:51 compute-0 ceph-mon[75097]: osdmap e29: 3 total, 3 up, 3 in
Jan 23 18:35:51 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4265751146' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 23 18:35:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4265751146' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 23 18:35:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 23 18:35:51 compute-0 sleepy_kirch[91479]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 23 18:35:51 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 23 18:35:51 compute-0 systemd[1]: libpod-71f644d325fa040fc1ac279bab40051cb135a5c6cc008320779e1dad22dea326.scope: Deactivated successfully.
Jan 23 18:35:51 compute-0 podman[91465]: 2026-01-23 18:35:51.963835732 +0000 UTC m=+0.681613616 container died 71f644d325fa040fc1ac279bab40051cb135a5c6cc008320779e1dad22dea326 (image=quay.io/ceph/ceph:v20, name=sleepy_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8657432ca80f1bf51e1a31b4599cd22fb3a0d8d597200d7073d962fe9748554e-merged.mount: Deactivated successfully.
Jan 23 18:35:52 compute-0 podman[91465]: 2026-01-23 18:35:52.013720109 +0000 UTC m=+0.731497993 container remove 71f644d325fa040fc1ac279bab40051cb135a5c6cc008320779e1dad22dea326 (image=quay.io/ceph/ceph:v20, name=sleepy_kirch, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:52 compute-0 systemd[1]: libpod-conmon-71f644d325fa040fc1ac279bab40051cb135a5c6cc008320779e1dad22dea326.scope: Deactivated successfully.
Jan 23 18:35:52 compute-0 sudo[91462]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:52 compute-0 ceph-mon[75097]: pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:52 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4265751146' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 23 18:35:52 compute-0 ceph-mon[75097]: osdmap e30: 3 total, 3 up, 3 in
Jan 23 18:35:52 compute-0 python3[91592]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:35:53 compute-0 python3[91663]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193352.6961653-36555-243460204606503/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:35:53 compute-0 sudo[91763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfuysorchavhvoqpzljtwsqjfqntgxhp ; /usr/bin/python3'
Jan 23 18:35:53 compute-0 sudo[91763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:53 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:53 compute-0 python3[91765]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:35:54 compute-0 sudo[91763]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:54 compute-0 sudo[91838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmtcehbxhlsryjbywixbfivyqnudxfut ; /usr/bin/python3'
Jan 23 18:35:54 compute-0 sudo[91838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:54 compute-0 python3[91840]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193353.6290584-36569-82389159996515/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=7d3f3993f3cf4d26c75d1922131de4ede86be8e4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:35:54 compute-0 sudo[91838]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:54 compute-0 sudo[91888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrixflioxmnakqrvuyyoyczxouzupdev ; /usr/bin/python3'
Jan 23 18:35:54 compute-0 sudo[91888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:54 compute-0 python3[91890]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:54 compute-0 podman[91891]: 2026-01-23 18:35:54.860374019 +0000 UTC m=+0.041432204 container create 089e8a8aee231b7596fca814676f69ecd1192ba8735be7de33156bf446bbb30b (image=quay.io/ceph/ceph:v20, name=xenodochial_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:54 compute-0 systemd[1]: Started libpod-conmon-089e8a8aee231b7596fca814676f69ecd1192ba8735be7de33156bf446bbb30b.scope.
Jan 23 18:35:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52342bfaae5413a3375aa5288492ac2218e637f04a41fea22724481f9dcb347/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52342bfaae5413a3375aa5288492ac2218e637f04a41fea22724481f9dcb347/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52342bfaae5413a3375aa5288492ac2218e637f04a41fea22724481f9dcb347/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:54 compute-0 podman[91891]: 2026-01-23 18:35:54.934664669 +0000 UTC m=+0.115722874 container init 089e8a8aee231b7596fca814676f69ecd1192ba8735be7de33156bf446bbb30b (image=quay.io/ceph/ceph:v20, name=xenodochial_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:54 compute-0 podman[91891]: 2026-01-23 18:35:54.841420369 +0000 UTC m=+0.022478594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:54 compute-0 podman[91891]: 2026-01-23 18:35:54.939794375 +0000 UTC m=+0.120852570 container start 089e8a8aee231b7596fca814676f69ecd1192ba8735be7de33156bf446bbb30b (image=quay.io/ceph/ceph:v20, name=xenodochial_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:54 compute-0 podman[91891]: 2026-01-23 18:35:54.943386919 +0000 UTC m=+0.124445114 container attach 089e8a8aee231b7596fca814676f69ecd1192ba8735be7de33156bf446bbb30b (image=quay.io/ceph/ceph:v20, name=xenodochial_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:54 compute-0 ceph-mon[75097]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 23 18:35:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1571339586' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 23 18:35:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1571339586' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 18:35:55 compute-0 xenodochial_lovelace[91906]: 
Jan 23 18:35:55 compute-0 xenodochial_lovelace[91906]: [global]
Jan 23 18:35:55 compute-0 xenodochial_lovelace[91906]:         fsid = 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:35:55 compute-0 xenodochial_lovelace[91906]:         mon_host = 192.168.122.100
Jan 23 18:35:55 compute-0 xenodochial_lovelace[91906]:         rgw_keystone_api_version = 3
Jan 23 18:35:55 compute-0 systemd[1]: libpod-089e8a8aee231b7596fca814676f69ecd1192ba8735be7de33156bf446bbb30b.scope: Deactivated successfully.
Jan 23 18:35:55 compute-0 podman[91891]: 2026-01-23 18:35:55.383095331 +0000 UTC m=+0.564153546 container died 089e8a8aee231b7596fca814676f69ecd1192ba8735be7de33156bf446bbb30b (image=quay.io/ceph/ceph:v20, name=xenodochial_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:55 compute-0 sudo[91931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:55 compute-0 sudo[91931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:55 compute-0 sudo[91931]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:55 compute-0 sudo[91968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:35:55 compute-0 sudo[91968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d52342bfaae5413a3375aa5288492ac2218e637f04a41fea22724481f9dcb347-merged.mount: Deactivated successfully.
Jan 23 18:35:55 compute-0 podman[91891]: 2026-01-23 18:35:55.752264022 +0000 UTC m=+0.933322237 container remove 089e8a8aee231b7596fca814676f69ecd1192ba8735be7de33156bf446bbb30b (image=quay.io/ceph/ceph:v20, name=xenodochial_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 18:35:55 compute-0 systemd[1]: libpod-conmon-089e8a8aee231b7596fca814676f69ecd1192ba8735be7de33156bf446bbb30b.scope: Deactivated successfully.
Jan 23 18:35:55 compute-0 sudo[91888]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:55 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:55 compute-0 podman[92037]: 2026-01-23 18:35:55.930209767 +0000 UTC m=+0.066697981 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:35:55 compute-0 sudo[92078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfozmlmzpuzltxrevovugfxemtymfcdf ; /usr/bin/python3'
Jan 23 18:35:55 compute-0 sudo[92078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:55 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1571339586' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 23 18:35:55 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1571339586' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 18:35:56 compute-0 podman[92037]: 2026-01-23 18:35:56.048104838 +0000 UTC m=+0.184593032 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:56 compute-0 python3[92082]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:56 compute-0 podman[92094]: 2026-01-23 18:35:56.139664403 +0000 UTC m=+0.046336794 container create 85d42f39420a38e2cc72de1cac0206674a26e278000e3a7e6040195fdff01792 (image=quay.io/ceph/ceph:v20, name=infallible_dijkstra, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:35:56 compute-0 systemd[1]: Started libpod-conmon-85d42f39420a38e2cc72de1cac0206674a26e278000e3a7e6040195fdff01792.scope.
Jan 23 18:35:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f16797d17d04239693a18aae3e81ea3844cae38651a4ae1afd43e26aeb50d5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f16797d17d04239693a18aae3e81ea3844cae38651a4ae1afd43e26aeb50d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f16797d17d04239693a18aae3e81ea3844cae38651a4ae1afd43e26aeb50d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:56 compute-0 podman[92094]: 2026-01-23 18:35:56.116093452 +0000 UTC m=+0.022750452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:56 compute-0 podman[92094]: 2026-01-23 18:35:56.228840716 +0000 UTC m=+0.135497726 container init 85d42f39420a38e2cc72de1cac0206674a26e278000e3a7e6040195fdff01792 (image=quay.io/ceph/ceph:v20, name=infallible_dijkstra, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:56 compute-0 podman[92094]: 2026-01-23 18:35:56.235348639 +0000 UTC m=+0.142005619 container start 85d42f39420a38e2cc72de1cac0206674a26e278000e3a7e6040195fdff01792 (image=quay.io/ceph/ceph:v20, name=infallible_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:35:56 compute-0 podman[92094]: 2026-01-23 18:35:56.245200158 +0000 UTC m=+0.151857158 container attach 85d42f39420a38e2cc72de1cac0206674a26e278000e3a7e6040195fdff01792 (image=quay.io/ceph/ceph:v20, name=infallible_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 18:35:56 compute-0 sudo[91968]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:35:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:35:57 compute-0 ceph-mon[75097]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:35:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:57 compute-0 sudo[92247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:57 compute-0 sudo[92247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:57 compute-0 sudo[92247]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:57 compute-0 sudo[92272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:35:57 compute-0 sudo[92272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:57 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 23 18:35:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/332040538' entity='client.admin' 
Jan 23 18:35:57 compute-0 infallible_dijkstra[92136]: set ssl_option
Jan 23 18:35:57 compute-0 systemd[1]: libpod-85d42f39420a38e2cc72de1cac0206674a26e278000e3a7e6040195fdff01792.scope: Deactivated successfully.
Jan 23 18:35:57 compute-0 podman[92094]: 2026-01-23 18:35:57.95339718 +0000 UTC m=+1.860054160 container died 85d42f39420a38e2cc72de1cac0206674a26e278000e3a7e6040195fdff01792 (image=quay.io/ceph/ceph:v20, name=infallible_dijkstra, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 18:35:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-78f16797d17d04239693a18aae3e81ea3844cae38651a4ae1afd43e26aeb50d5-merged.mount: Deactivated successfully.
Jan 23 18:35:58 compute-0 podman[92094]: 2026-01-23 18:35:58.001522449 +0000 UTC m=+1.908179439 container remove 85d42f39420a38e2cc72de1cac0206674a26e278000e3a7e6040195fdff01792 (image=quay.io/ceph/ceph:v20, name=infallible_dijkstra, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:58 compute-0 systemd[1]: libpod-conmon-85d42f39420a38e2cc72de1cac0206674a26e278000e3a7e6040195fdff01792.scope: Deactivated successfully.
Jan 23 18:35:58 compute-0 sudo[92078]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:58 compute-0 sudo[92272]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:35:58 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:58 compute-0 sudo[92365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwlypejdosnqrymcniorpazetpymcaft ; /usr/bin/python3'
Jan 23 18:35:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:35:58 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:35:58 compute-0 sudo[92365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:35:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:35:58 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:35:58 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:35:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:35:58 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:35:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:35:58 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:58 compute-0 sudo[92368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:58 compute-0 sudo[92368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:58 compute-0 sudo[92368]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:58 compute-0 sudo[92393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:35:58 compute-0 sudo[92393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:58 compute-0 python3[92367]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:35:58 compute-0 podman[92418]: 2026-01-23 18:35:58.337748951 +0000 UTC m=+0.037003417 container create f2018f30676d555ec8060eb6e80ca2a4aa70634bc852d007e77bb32d27e6b95e (image=quay.io/ceph/ceph:v20, name=festive_goldberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 18:35:58 compute-0 systemd[1]: Started libpod-conmon-f2018f30676d555ec8060eb6e80ca2a4aa70634bc852d007e77bb32d27e6b95e.scope.
Jan 23 18:35:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb6fb80ed8b9b1b4698f79493a2cf7fa30d249023e7a9313166682665530d1c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb6fb80ed8b9b1b4698f79493a2cf7fa30d249023e7a9313166682665530d1c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb6fb80ed8b9b1b4698f79493a2cf7fa30d249023e7a9313166682665530d1c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:58 compute-0 podman[92418]: 2026-01-23 18:35:58.409910905 +0000 UTC m=+0.109165391 container init f2018f30676d555ec8060eb6e80ca2a4aa70634bc852d007e77bb32d27e6b95e (image=quay.io/ceph/ceph:v20, name=festive_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 23 18:35:58 compute-0 podman[92418]: 2026-01-23 18:35:58.416052067 +0000 UTC m=+0.115306533 container start f2018f30676d555ec8060eb6e80ca2a4aa70634bc852d007e77bb32d27e6b95e (image=quay.io/ceph/ceph:v20, name=festive_goldberg, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:58 compute-0 podman[92418]: 2026-01-23 18:35:58.321290767 +0000 UTC m=+0.020545253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:35:58 compute-0 podman[92418]: 2026-01-23 18:35:58.420503014 +0000 UTC m=+0.119757490 container attach f2018f30676d555ec8060eb6e80ca2a4aa70634bc852d007e77bb32d27e6b95e (image=quay.io/ceph/ceph:v20, name=festive_goldberg, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 23 18:35:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:58 compute-0 ceph-mon[75097]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/332040538' entity='client.admin' 
Jan 23 18:35:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:35:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:35:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:35:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:35:58 compute-0 podman[92450]: 2026-01-23 18:35:58.565027908 +0000 UTC m=+0.044131636 container create 8cd49f47c97bd3da9f99f892970ee525eff462fa2e6039fa525b1b0e73449d9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_darwin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:58 compute-0 systemd[1]: Started libpod-conmon-8cd49f47c97bd3da9f99f892970ee525eff462fa2e6039fa525b1b0e73449d9f.scope.
Jan 23 18:35:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:58 compute-0 podman[92450]: 2026-01-23 18:35:58.641739721 +0000 UTC m=+0.120843459 container init 8cd49f47c97bd3da9f99f892970ee525eff462fa2e6039fa525b1b0e73449d9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_darwin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:58 compute-0 podman[92450]: 2026-01-23 18:35:58.548631545 +0000 UTC m=+0.027735293 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:58 compute-0 podman[92450]: 2026-01-23 18:35:58.648918052 +0000 UTC m=+0.128021780 container start 8cd49f47c97bd3da9f99f892970ee525eff462fa2e6039fa525b1b0e73449d9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 18:35:58 compute-0 determined_darwin[92485]: 167 167
Jan 23 18:35:58 compute-0 systemd[1]: libpod-8cd49f47c97bd3da9f99f892970ee525eff462fa2e6039fa525b1b0e73449d9f.scope: Deactivated successfully.
Jan 23 18:35:58 compute-0 podman[92450]: 2026-01-23 18:35:58.65301723 +0000 UTC m=+0.132120978 container attach 8cd49f47c97bd3da9f99f892970ee525eff462fa2e6039fa525b1b0e73449d9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_darwin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 23 18:35:58 compute-0 podman[92450]: 2026-01-23 18:35:58.654055067 +0000 UTC m=+0.133158795 container died 8cd49f47c97bd3da9f99f892970ee525eff462fa2e6039fa525b1b0e73449d9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca5a5c9ee5a908d5e64e66e4d8c5295724dc3ff898077fd8d55b3acb3ea0e360-merged.mount: Deactivated successfully.
Jan 23 18:35:58 compute-0 podman[92450]: 2026-01-23 18:35:58.694411252 +0000 UTC m=+0.173514980 container remove 8cd49f47c97bd3da9f99f892970ee525eff462fa2e6039fa525b1b0e73449d9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_darwin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:35:58 compute-0 systemd[1]: libpod-conmon-8cd49f47c97bd3da9f99f892970ee525eff462fa2e6039fa525b1b0e73449d9f.scope: Deactivated successfully.
Jan 23 18:35:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:35:58 compute-0 ceph-mgr[75390]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 23 18:35:58 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 23 18:35:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 23 18:35:58 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:58 compute-0 festive_goldberg[92434]: Scheduled rgw.rgw update...
Jan 23 18:35:58 compute-0 systemd[1]: libpod-f2018f30676d555ec8060eb6e80ca2a4aa70634bc852d007e77bb32d27e6b95e.scope: Deactivated successfully.
Jan 23 18:35:58 compute-0 podman[92418]: 2026-01-23 18:35:58.868970598 +0000 UTC m=+0.568225064 container died f2018f30676d555ec8060eb6e80ca2a4aa70634bc852d007e77bb32d27e6b95e (image=quay.io/ceph/ceph:v20, name=festive_goldberg, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 18:35:58 compute-0 podman[92508]: 2026-01-23 18:35:58.882480383 +0000 UTC m=+0.057792755 container create c1bc2081bcedce020367c8ed82912e458f166a7c7c401ada66770ab452956a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:35:58 compute-0 podman[92418]: 2026-01-23 18:35:58.916656436 +0000 UTC m=+0.615910902 container remove f2018f30676d555ec8060eb6e80ca2a4aa70634bc852d007e77bb32d27e6b95e (image=quay.io/ceph/ceph:v20, name=festive_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 18:35:58 compute-0 systemd[1]: Started libpod-conmon-c1bc2081bcedce020367c8ed82912e458f166a7c7c401ada66770ab452956a10.scope.
Jan 23 18:35:58 compute-0 systemd[1]: libpod-conmon-f2018f30676d555ec8060eb6e80ca2a4aa70634bc852d007e77bb32d27e6b95e.scope: Deactivated successfully.
Jan 23 18:35:58 compute-0 sudo[92365]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:58 compute-0 podman[92508]: 2026-01-23 18:35:58.850641244 +0000 UTC m=+0.025953636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:35:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82fc0747e752161b86518eecda78d078ad931242bd138ed97942840d8f2d5e8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82fc0747e752161b86518eecda78d078ad931242bd138ed97942840d8f2d5e8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82fc0747e752161b86518eecda78d078ad931242bd138ed97942840d8f2d5e8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82fc0747e752161b86518eecda78d078ad931242bd138ed97942840d8f2d5e8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82fc0747e752161b86518eecda78d078ad931242bd138ed97942840d8f2d5e8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:35:58 compute-0 podman[92508]: 2026-01-23 18:35:58.964471687 +0000 UTC m=+0.139784079 container init c1bc2081bcedce020367c8ed82912e458f166a7c7c401ada66770ab452956a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilbur, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:35:58 compute-0 podman[92508]: 2026-01-23 18:35:58.97066905 +0000 UTC m=+0.145981422 container start c1bc2081bcedce020367c8ed82912e458f166a7c7c401ada66770ab452956a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:35:58 compute-0 podman[92508]: 2026-01-23 18:35:58.973587728 +0000 UTC m=+0.148900100 container attach c1bc2081bcedce020367c8ed82912e458f166a7c7c401ada66770ab452956a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilbur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cb6fb80ed8b9b1b4698f79493a2cf7fa30d249023e7a9313166682665530d1c-merged.mount: Deactivated successfully.
Jan 23 18:35:59 compute-0 nostalgic_wilbur[92539]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:35:59 compute-0 nostalgic_wilbur[92539]: --> All data devices are unavailable
Jan 23 18:35:59 compute-0 systemd[1]: libpod-c1bc2081bcedce020367c8ed82912e458f166a7c7c401ada66770ab452956a10.scope: Deactivated successfully.
Jan 23 18:35:59 compute-0 podman[92508]: 2026-01-23 18:35:59.458334978 +0000 UTC m=+0.633647370 container died c1bc2081bcedce020367c8ed82912e458f166a7c7c401ada66770ab452956a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilbur, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-82fc0747e752161b86518eecda78d078ad931242bd138ed97942840d8f2d5e8f-merged.mount: Deactivated successfully.
Jan 23 18:35:59 compute-0 podman[92508]: 2026-01-23 18:35:59.515718853 +0000 UTC m=+0.691031215 container remove c1bc2081bcedce020367c8ed82912e458f166a7c7c401ada66770ab452956a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:35:59 compute-0 systemd[1]: libpod-conmon-c1bc2081bcedce020367c8ed82912e458f166a7c7c401ada66770ab452956a10.scope: Deactivated successfully.
Jan 23 18:35:59 compute-0 sudo[92393]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:59 compute-0 sudo[92571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:35:59 compute-0 sudo[92571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:59 compute-0 sudo[92571]: pam_unix(sudo:session): session closed for user root
Jan 23 18:35:59 compute-0 sudo[92596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:35:59 compute-0 sudo[92596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:35:59 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:35:59 compute-0 ceph-mon[75097]: from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:35:59 compute-0 ceph-mon[75097]: Saving service rgw.rgw spec with placement compute-0
Jan 23 18:35:59 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:35:59 compute-0 python3[92696]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:35:59 compute-0 podman[92709]: 2026-01-23 18:35:59.972723891 +0000 UTC m=+0.053371770 container create 7fb358ed345e3aab6fa99f66ea8232aeedbca02bdec4522edfda2607676179e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 18:36:00 compute-0 systemd[1]: Started libpod-conmon-7fb358ed345e3aab6fa99f66ea8232aeedbca02bdec4522edfda2607676179e4.scope.
Jan 23 18:36:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:36:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:36:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:36:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:36:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:36:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:36:00 compute-0 podman[92709]: 2026-01-23 18:35:59.953773141 +0000 UTC m=+0.034421020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:00 compute-0 podman[92709]: 2026-01-23 18:36:00.055373271 +0000 UTC m=+0.136021150 container init 7fb358ed345e3aab6fa99f66ea8232aeedbca02bdec4522edfda2607676179e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 18:36:00 compute-0 podman[92709]: 2026-01-23 18:36:00.062252773 +0000 UTC m=+0.142900632 container start 7fb358ed345e3aab6fa99f66ea8232aeedbca02bdec4522edfda2607676179e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 18:36:00 compute-0 podman[92709]: 2026-01-23 18:36:00.066229958 +0000 UTC m=+0.146877817 container attach 7fb358ed345e3aab6fa99f66ea8232aeedbca02bdec4522edfda2607676179e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_archimedes, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 23 18:36:00 compute-0 objective_archimedes[92746]: 167 167
Jan 23 18:36:00 compute-0 systemd[1]: libpod-7fb358ed345e3aab6fa99f66ea8232aeedbca02bdec4522edfda2607676179e4.scope: Deactivated successfully.
Jan 23 18:36:00 compute-0 podman[92709]: 2026-01-23 18:36:00.067750808 +0000 UTC m=+0.148398677 container died 7fb358ed345e3aab6fa99f66ea8232aeedbca02bdec4522edfda2607676179e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_archimedes, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 18:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-52d796258992fe485380da4fbf6a1bcfb20d1fa58563f756774c9a78b39b5c79-merged.mount: Deactivated successfully.
Jan 23 18:36:00 compute-0 podman[92709]: 2026-01-23 18:36:00.107300011 +0000 UTC m=+0.187947860 container remove 7fb358ed345e3aab6fa99f66ea8232aeedbca02bdec4522edfda2607676179e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 18:36:00 compute-0 systemd[1]: libpod-conmon-7fb358ed345e3aab6fa99f66ea8232aeedbca02bdec4522edfda2607676179e4.scope: Deactivated successfully.
Jan 23 18:36:00 compute-0 podman[92818]: 2026-01-23 18:36:00.260218856 +0000 UTC m=+0.044239758 container create 7808812055c5a2891faa8d0a035de006e3f5e8c0e901ab78717c0c98c3f528c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_swirles, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 18:36:00 compute-0 python3[92812]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193359.6438127-36610-109117448747523/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:36:00 compute-0 systemd[1]: Started libpod-conmon-7808812055c5a2891faa8d0a035de006e3f5e8c0e901ab78717c0c98c3f528c7.scope.
Jan 23 18:36:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939021ddd43b55f484bb1798097b31f2807b6f26a3231606fff00855978ef916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:00 compute-0 podman[92818]: 2026-01-23 18:36:00.240757083 +0000 UTC m=+0.024778005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939021ddd43b55f484bb1798097b31f2807b6f26a3231606fff00855978ef916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939021ddd43b55f484bb1798097b31f2807b6f26a3231606fff00855978ef916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939021ddd43b55f484bb1798097b31f2807b6f26a3231606fff00855978ef916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:00 compute-0 podman[92818]: 2026-01-23 18:36:00.352968624 +0000 UTC m=+0.136989546 container init 7808812055c5a2891faa8d0a035de006e3f5e8c0e901ab78717c0c98c3f528c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_swirles, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 18:36:00 compute-0 podman[92818]: 2026-01-23 18:36:00.359951478 +0000 UTC m=+0.143972380 container start 7808812055c5a2891faa8d0a035de006e3f5e8c0e901ab78717c0c98c3f528c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_swirles, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 23 18:36:00 compute-0 podman[92818]: 2026-01-23 18:36:00.363771048 +0000 UTC m=+0.147791980 container attach 7808812055c5a2891faa8d0a035de006e3f5e8c0e901ab78717c0c98c3f528c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_swirles, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:00 compute-0 brave_swirles[92835]: {
Jan 23 18:36:00 compute-0 brave_swirles[92835]:     "0": [
Jan 23 18:36:00 compute-0 brave_swirles[92835]:         {
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "devices": [
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "/dev/loop3"
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             ],
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_name": "ceph_lv0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_size": "21470642176",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "name": "ceph_lv0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "tags": {
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.crush_device_class": "",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.encrypted": "0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.osd_id": "0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.type": "block",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.vdo": "0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.with_tpm": "0"
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             },
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "type": "block",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "vg_name": "ceph_vg0"
Jan 23 18:36:00 compute-0 brave_swirles[92835]:         }
Jan 23 18:36:00 compute-0 brave_swirles[92835]:     ],
Jan 23 18:36:00 compute-0 brave_swirles[92835]:     "1": [
Jan 23 18:36:00 compute-0 brave_swirles[92835]:         {
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "devices": [
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "/dev/loop4"
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             ],
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_name": "ceph_lv1",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_size": "21470642176",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "name": "ceph_lv1",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "tags": {
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.crush_device_class": "",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.encrypted": "0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.osd_id": "1",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.type": "block",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.vdo": "0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.with_tpm": "0"
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             },
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "type": "block",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "vg_name": "ceph_vg1"
Jan 23 18:36:00 compute-0 brave_swirles[92835]:         }
Jan 23 18:36:00 compute-0 brave_swirles[92835]:     ],
Jan 23 18:36:00 compute-0 brave_swirles[92835]:     "2": [
Jan 23 18:36:00 compute-0 brave_swirles[92835]:         {
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "devices": [
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "/dev/loop5"
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             ],
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_name": "ceph_lv2",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_size": "21470642176",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "name": "ceph_lv2",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "tags": {
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.crush_device_class": "",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.encrypted": "0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.osd_id": "2",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.type": "block",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.vdo": "0",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:                 "ceph.with_tpm": "0"
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             },
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "type": "block",
Jan 23 18:36:00 compute-0 brave_swirles[92835]:             "vg_name": "ceph_vg2"
Jan 23 18:36:00 compute-0 brave_swirles[92835]:         }
Jan 23 18:36:00 compute-0 brave_swirles[92835]:     ]
Jan 23 18:36:00 compute-0 brave_swirles[92835]: }
Jan 23 18:36:00 compute-0 systemd[1]: libpod-7808812055c5a2891faa8d0a035de006e3f5e8c0e901ab78717c0c98c3f528c7.scope: Deactivated successfully.
Jan 23 18:36:00 compute-0 podman[92818]: 2026-01-23 18:36:00.653978586 +0000 UTC m=+0.437999498 container died 7808812055c5a2891faa8d0a035de006e3f5e8c0e901ab78717c0c98c3f528c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:36:00 compute-0 sudo[92892]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvgayrsnfeqoynbjztwfyybjwgosmcex ; /usr/bin/python3'
Jan 23 18:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-939021ddd43b55f484bb1798097b31f2807b6f26a3231606fff00855978ef916-merged.mount: Deactivated successfully.
Jan 23 18:36:00 compute-0 sudo[92892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:00 compute-0 podman[92818]: 2026-01-23 18:36:00.69962377 +0000 UTC m=+0.483644672 container remove 7808812055c5a2891faa8d0a035de006e3f5e8c0e901ab78717c0c98c3f528c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 18:36:00 compute-0 systemd[1]: libpod-conmon-7808812055c5a2891faa8d0a035de006e3f5e8c0e901ab78717c0c98c3f528c7.scope: Deactivated successfully.
Jan 23 18:36:00 compute-0 sudo[92596]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:00 compute-0 sudo[92904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:00 compute-0 sudo[92904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:00 compute-0 sudo[92904]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:00 compute-0 python3[92903]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:00 compute-0 sudo[92929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:36:00 compute-0 sudo[92929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:00 compute-0 podman[92952]: 2026-01-23 18:36:00.868612729 +0000 UTC m=+0.022681629 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:01 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:02 compute-0 podman[92952]: 2026-01-23 18:36:02.479091213 +0000 UTC m=+1.633160133 container create 2dd67b2765c252a59cf122df6341d3cbb10937884c333b5e8d6df5c06d061647 (image=quay.io/ceph/ceph:v20, name=objective_liskov, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 18:36:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:02 compute-0 ceph-mon[75097]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:02 compute-0 systemd[1]: Started libpod-conmon-2dd67b2765c252a59cf122df6341d3cbb10937884c333b5e8d6df5c06d061647.scope.
Jan 23 18:36:02 compute-0 podman[92979]: 2026-01-23 18:36:02.738135539 +0000 UTC m=+0.224146986 container create b5569d3c9cb7ef17a3fd2747f942e0fcf9bf46569c120f9bc7c44fa67cf7ef93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hermann, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/315064d99f7c261e8bf439a731790fa09e4cba07587a91729ee54156b70304db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/315064d99f7c261e8bf439a731790fa09e4cba07587a91729ee54156b70304db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/315064d99f7c261e8bf439a731790fa09e4cba07587a91729ee54156b70304db/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:02 compute-0 systemd[1]: Started libpod-conmon-b5569d3c9cb7ef17a3fd2747f942e0fcf9bf46569c120f9bc7c44fa67cf7ef93.scope.
Jan 23 18:36:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:02 compute-0 podman[92979]: 2026-01-23 18:36:02.720099983 +0000 UTC m=+0.206111450 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:03 compute-0 podman[92952]: 2026-01-23 18:36:03.171194825 +0000 UTC m=+2.325263725 container init 2dd67b2765c252a59cf122df6341d3cbb10937884c333b5e8d6df5c06d061647 (image=quay.io/ceph/ceph:v20, name=objective_liskov, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:03 compute-0 podman[92952]: 2026-01-23 18:36:03.176821894 +0000 UTC m=+2.330890774 container start 2dd67b2765c252a59cf122df6341d3cbb10937884c333b5e8d6df5c06d061647 (image=quay.io/ceph/ceph:v20, name=objective_liskov, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:36:03 compute-0 podman[92952]: 2026-01-23 18:36:03.618448016 +0000 UTC m=+2.772516936 container attach 2dd67b2765c252a59cf122df6341d3cbb10937884c333b5e8d6df5c06d061647 (image=quay.io/ceph/ceph:v20, name=objective_liskov, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:03 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:36:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 23 18:36:03 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 23 18:36:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 23 18:36:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 23 18:36:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 23 18:36:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 23 18:36:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 23 18:36:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 23 18:36:03 compute-0 ceph-mon[75097]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 23 18:36:03 compute-0 ceph-mon[75097]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 23 18:36:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0[75093]: 2026-01-23T18:36:03.926+0000 7f569a965640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 23 18:36:03 compute-0 ceph-mon[75097]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:03 compute-0 podman[92979]: 2026-01-23 18:36:03.936083398 +0000 UTC m=+1.422094885 container init b5569d3c9cb7ef17a3fd2747f942e0fcf9bf46569c120f9bc7c44fa67cf7ef93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hermann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:36:03 compute-0 podman[92979]: 2026-01-23 18:36:03.942728113 +0000 UTC m=+1.428739560 container start b5569d3c9cb7ef17a3fd2747f942e0fcf9bf46569c120f9bc7c44fa67cf7ef93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 23 18:36:03 compute-0 systemd[1]: libpod-b5569d3c9cb7ef17a3fd2747f942e0fcf9bf46569c120f9bc7c44fa67cf7ef93.scope: Deactivated successfully.
Jan 23 18:36:03 compute-0 conmon[93000]: conmon b5569d3c9cb7ef17a3fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5569d3c9cb7ef17a3fd2747f942e0fcf9bf46569c120f9bc7c44fa67cf7ef93.scope/container/memory.events
Jan 23 18:36:03 compute-0 beautiful_hermann[93000]: 167 167
Jan 23 18:36:04 compute-0 podman[92979]: 2026-01-23 18:36:04.602305126 +0000 UTC m=+2.088316613 container attach b5569d3c9cb7ef17a3fd2747f942e0fcf9bf46569c120f9bc7c44fa67cf7ef93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hermann, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:04 compute-0 podman[92979]: 2026-01-23 18:36:04.602761198 +0000 UTC m=+2.088772655 container died b5569d3c9cb7ef17a3fd2747f942e0fcf9bf46569c120f9bc7c44fa67cf7ef93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hermann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 23 18:36:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 23 18:36:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e2 new map
Jan 23 18:36:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-23T18:36:03:927191+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T18:36:03.926949+0000
                                           modified        2026-01-23T18:36:03.926949+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 23 18:36:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 23 18:36:04 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 23 18:36:04 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 23 18:36:04 compute-0 ceph-mgr[75390]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 23 18:36:04 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 23 18:36:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 23 18:36:04 compute-0 ceph-mon[75097]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:36:04 compute-0 ceph-mon[75097]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 23 18:36:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 23 18:36:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 23 18:36:04 compute-0 ceph-mon[75097]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 23 18:36:04 compute-0 ceph-mon[75097]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 23 18:36:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 23 18:36:04 compute-0 ceph-mon[75097]: osdmap e31: 3 total, 3 up, 3 in
Jan 23 18:36:04 compute-0 ceph-mon[75097]: fsmap cephfs:0
Jan 23 18:36:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5293f6cc5bb812f080923b674f1344b46ab9129652a47c09577cda56285ca43-merged.mount: Deactivated successfully.
Jan 23 18:36:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 23 18:36:04 compute-0 systemd[1]: libpod-2dd67b2765c252a59cf122df6341d3cbb10937884c333b5e8d6df5c06d061647.scope: Deactivated successfully.
Jan 23 18:36:05 compute-0 podman[92952]: 2026-01-23 18:36:05.012975642 +0000 UTC m=+4.167044522 container died 2dd67b2765c252a59cf122df6341d3cbb10937884c333b5e8d6df5c06d061647 (image=quay.io/ceph/ceph:v20, name=objective_liskov, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:05 compute-0 podman[92979]: 2026-01-23 18:36:05.055538334 +0000 UTC m=+2.541549781 container remove b5569d3c9cb7ef17a3fd2747f942e0fcf9bf46569c120f9bc7c44fa67cf7ef93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hermann, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:05 compute-0 systemd[1]: libpod-conmon-b5569d3c9cb7ef17a3fd2747f942e0fcf9bf46569c120f9bc7c44fa67cf7ef93.scope: Deactivated successfully.
Jan 23 18:36:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-315064d99f7c261e8bf439a731790fa09e4cba07587a91729ee54156b70304db-merged.mount: Deactivated successfully.
Jan 23 18:36:05 compute-0 podman[93056]: 2026-01-23 18:36:05.205229024 +0000 UTC m=+0.040914370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:05 compute-0 podman[92952]: 2026-01-23 18:36:05.495618286 +0000 UTC m=+4.649687166 container remove 2dd67b2765c252a59cf122df6341d3cbb10937884c333b5e8d6df5c06d061647 (image=quay.io/ceph/ceph:v20, name=objective_liskov, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:36:05 compute-0 systemd[1]: libpod-conmon-2dd67b2765c252a59cf122df6341d3cbb10937884c333b5e8d6df5c06d061647.scope: Deactivated successfully.
Jan 23 18:36:05 compute-0 sudo[92892]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:05 compute-0 sudo[93093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzawxsljuwjtpwnmeurumourioavjghj ; /usr/bin/python3'
Jan 23 18:36:05 compute-0 sudo[93093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:05 compute-0 python3[93095]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:05 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:05 compute-0 podman[93056]: 2026-01-23 18:36:05.877926784 +0000 UTC m=+0.713612100 container create b1eba79aebf0cb28661b38307147d77f53634f0832cba49bdd70e57583d66400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_antonelli, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:05 compute-0 systemd[1]: Started libpod-conmon-b1eba79aebf0cb28661b38307147d77f53634f0832cba49bdd70e57583d66400.scope.
Jan 23 18:36:05 compute-0 podman[93096]: 2026-01-23 18:36:05.929897555 +0000 UTC m=+0.121383134 container create 5f1591d7148d3d17576fb5f3220643d324618c6c356cb0e1c49682d270b4d77a (image=quay.io/ceph/ceph:v20, name=awesome_newton, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b8931858b67d0f6690f0e49387c8080b43afc96b0363295eaf5b8b3c4b449/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b8931858b67d0f6690f0e49387c8080b43afc96b0363295eaf5b8b3c4b449/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b8931858b67d0f6690f0e49387c8080b43afc96b0363295eaf5b8b3c4b449/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b8931858b67d0f6690f0e49387c8080b43afc96b0363295eaf5b8b3c4b449/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:06 compute-0 podman[93096]: 2026-01-23 18:36:05.908844009 +0000 UTC m=+0.100329608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:06 compute-0 ceph-mon[75097]: Saving service mds.cephfs spec with placement compute-0
Jan 23 18:36:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:06 compute-0 systemd[1]: Started libpod-conmon-5f1591d7148d3d17576fb5f3220643d324618c6c356cb0e1c49682d270b4d77a.scope.
Jan 23 18:36:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f885bd7be9f5dc1983b4f5b5839b522ee2809f8d7cd3101810ce671250c79a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f885bd7be9f5dc1983b4f5b5839b522ee2809f8d7cd3101810ce671250c79a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f885bd7be9f5dc1983b4f5b5839b522ee2809f8d7cd3101810ce671250c79a3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:06 compute-0 podman[93056]: 2026-01-23 18:36:06.831507844 +0000 UTC m=+1.667193220 container init b1eba79aebf0cb28661b38307147d77f53634f0832cba49bdd70e57583d66400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 18:36:06 compute-0 podman[93056]: 2026-01-23 18:36:06.84350386 +0000 UTC m=+1.679189216 container start b1eba79aebf0cb28661b38307147d77f53634f0832cba49bdd70e57583d66400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:36:06 compute-0 podman[93056]: 2026-01-23 18:36:06.855520418 +0000 UTC m=+1.691205744 container attach b1eba79aebf0cb28661b38307147d77f53634f0832cba49bdd70e57583d66400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 18:36:06 compute-0 podman[93096]: 2026-01-23 18:36:06.943408217 +0000 UTC m=+1.134893816 container init 5f1591d7148d3d17576fb5f3220643d324618c6c356cb0e1c49682d270b4d77a (image=quay.io/ceph/ceph:v20, name=awesome_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 18:36:06 compute-0 podman[93096]: 2026-01-23 18:36:06.950321829 +0000 UTC m=+1.141807408 container start 5f1591d7148d3d17576fb5f3220643d324618c6c356cb0e1c49682d270b4d77a (image=quay.io/ceph/ceph:v20, name=awesome_newton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 18:36:06 compute-0 podman[93096]: 2026-01-23 18:36:06.956094531 +0000 UTC m=+1.147580140 container attach 5f1591d7148d3d17576fb5f3220643d324618c6c356cb0e1c49682d270b4d77a (image=quay.io/ceph/ceph:v20, name=awesome_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:07 compute-0 ceph-mon[75097]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:07 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:36:07 compute-0 ceph-mgr[75390]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 23 18:36:07 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 23 18:36:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 23 18:36:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:07 compute-0 awesome_newton[93118]: Scheduled mds.cephfs update...
Jan 23 18:36:07 compute-0 systemd[1]: libpod-5f1591d7148d3d17576fb5f3220643d324618c6c356cb0e1c49682d270b4d77a.scope: Deactivated successfully.
Jan 23 18:36:07 compute-0 podman[93096]: 2026-01-23 18:36:07.498841572 +0000 UTC m=+1.690327181 container died 5f1591d7148d3d17576fb5f3220643d324618c6c356cb0e1c49682d270b4d77a (image=quay.io/ceph/ceph:v20, name=awesome_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f885bd7be9f5dc1983b4f5b5839b522ee2809f8d7cd3101810ce671250c79a3-merged.mount: Deactivated successfully.
Jan 23 18:36:07 compute-0 podman[93096]: 2026-01-23 18:36:07.548905523 +0000 UTC m=+1.740391102 container remove 5f1591d7148d3d17576fb5f3220643d324618c6c356cb0e1c49682d270b4d77a (image=quay.io/ceph/ceph:v20, name=awesome_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:36:07 compute-0 systemd[1]: libpod-conmon-5f1591d7148d3d17576fb5f3220643d324618c6c356cb0e1c49682d270b4d77a.scope: Deactivated successfully.
Jan 23 18:36:07 compute-0 sudo[93093]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:07 compute-0 lvm[93229]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:36:07 compute-0 lvm[93230]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:36:07 compute-0 lvm[93229]: VG ceph_vg0 finished
Jan 23 18:36:07 compute-0 lvm[93230]: VG ceph_vg1 finished
Jan 23 18:36:07 compute-0 lvm[93232]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:36:07 compute-0 lvm[93232]: VG ceph_vg2 finished
Jan 23 18:36:07 compute-0 lvm[93233]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:36:07 compute-0 lvm[93233]: VG ceph_vg0 finished
Jan 23 18:36:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:07 compute-0 jovial_antonelli[93111]: {}
Jan 23 18:36:07 compute-0 systemd[1]: libpod-b1eba79aebf0cb28661b38307147d77f53634f0832cba49bdd70e57583d66400.scope: Deactivated successfully.
Jan 23 18:36:07 compute-0 podman[93056]: 2026-01-23 18:36:07.740078558 +0000 UTC m=+2.575763874 container died b1eba79aebf0cb28661b38307147d77f53634f0832cba49bdd70e57583d66400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 18:36:07 compute-0 systemd[1]: libpod-b1eba79aebf0cb28661b38307147d77f53634f0832cba49bdd70e57583d66400.scope: Consumed 1.455s CPU time.
Jan 23 18:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f19b8931858b67d0f6690f0e49387c8080b43afc96b0363295eaf5b8b3c4b449-merged.mount: Deactivated successfully.
Jan 23 18:36:07 compute-0 podman[93056]: 2026-01-23 18:36:07.822921843 +0000 UTC m=+2.658607159 container remove b1eba79aebf0cb28661b38307147d77f53634f0832cba49bdd70e57583d66400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:07 compute-0 systemd[1]: libpod-conmon-b1eba79aebf0cb28661b38307147d77f53634f0832cba49bdd70e57583d66400.scope: Deactivated successfully.
Jan 23 18:36:07 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:07 compute-0 sudo[92929]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:36:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:36:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:07 compute-0 sudo[93250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:36:07 compute-0 sudo[93250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:07 compute-0 sudo[93250]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:08 compute-0 sudo[93275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:08 compute-0 sudo[93275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:08 compute-0 sudo[93275]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:08 compute-0 sudo[93300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:36:08 compute-0 sudo[93300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:08 compute-0 sudo[93459]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvupiqxwjexdkmwdvegkdlmcmoudvblt ; /usr/bin/python3'
Jan 23 18:36:08 compute-0 sudo[93459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:08 compute-0 python3[93461]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 18:36:08 compute-0 sudo[93459]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:09 compute-0 sudo[93532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xglcrjpuzibmhsjtkjfpohzgxpdgcdce ; /usr/bin/python3'
Jan 23 18:36:09 compute-0 sudo[93532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:09 compute-0 python3[93534]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193368.514679-36664-114377522653695/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=2d2b8b0157954eb4dc62e722466d3a90ab6a11ed backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:36:09 compute-0 sudo[93532]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:09 compute-0 sudo[93582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujcosbuqkflhoebfsqtmfcbkjquwwyoa ; /usr/bin/python3'
Jan 23 18:36:09 compute-0 sudo[93582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:09 compute-0 python3[93584]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:09 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:10 compute-0 ceph-mon[75097]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 18:36:10 compute-0 ceph-mon[75097]: Saving service mds.cephfs spec with placement compute-0
Jan 23 18:36:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:10 compute-0 ceph-mon[75097]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:10 compute-0 podman[93369]: 2026-01-23 18:36:10.363276702 +0000 UTC m=+1.905818767 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 18:36:10 compute-0 podman[93369]: 2026-01-23 18:36:10.47008133 +0000 UTC m=+2.012623375 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:10 compute-0 podman[93585]: 2026-01-23 18:36:10.510833717 +0000 UTC m=+0.798430660 container create d1b5faa39a321c6a101f28e648b41dc695b81addef93255aab109b27e193dc59 (image=quay.io/ceph/ceph:v20, name=trusting_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 18:36:10 compute-0 podman[93585]: 2026-01-23 18:36:10.473428609 +0000 UTC m=+0.761025562 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:10 compute-0 systemd[1]: Started libpod-conmon-d1b5faa39a321c6a101f28e648b41dc695b81addef93255aab109b27e193dc59.scope.
Jan 23 18:36:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6849270be76b1f27bdeb87ac557d32a64e7e1baa4f4e8bd3f8e7687e6660ee0a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6849270be76b1f27bdeb87ac557d32a64e7e1baa4f4e8bd3f8e7687e6660ee0a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:10 compute-0 podman[93585]: 2026-01-23 18:36:10.829266168 +0000 UTC m=+1.116863131 container init d1b5faa39a321c6a101f28e648b41dc695b81addef93255aab109b27e193dc59 (image=quay.io/ceph/ceph:v20, name=trusting_fermi, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 18:36:10 compute-0 podman[93585]: 2026-01-23 18:36:10.835527473 +0000 UTC m=+1.123124396 container start d1b5faa39a321c6a101f28e648b41dc695b81addef93255aab109b27e193dc59 (image=quay.io/ceph/ceph:v20, name=trusting_fermi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 18:36:10 compute-0 podman[93585]: 2026-01-23 18:36:10.840353261 +0000 UTC m=+1.127950194 container attach d1b5faa39a321c6a101f28e648b41dc695b81addef93255aab109b27e193dc59 (image=quay.io/ceph/ceph:v20, name=trusting_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:36:11 compute-0 sudo[93300]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:36:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:36:11 compute-0 ceph-mon[75097]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 23 18:36:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2928320394' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 23 18:36:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:36:11 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2928320394' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 23 18:36:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:36:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:36:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:36:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:36:11 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:36:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:36:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:36:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:36:11 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:11 compute-0 systemd[1]: libpod-d1b5faa39a321c6a101f28e648b41dc695b81addef93255aab109b27e193dc59.scope: Deactivated successfully.
Jan 23 18:36:11 compute-0 podman[93585]: 2026-01-23 18:36:11.38037134 +0000 UTC m=+1.667968283 container died d1b5faa39a321c6a101f28e648b41dc695b81addef93255aab109b27e193dc59 (image=quay.io/ceph/ceph:v20, name=trusting_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 18:36:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6849270be76b1f27bdeb87ac557d32a64e7e1baa4f4e8bd3f8e7687e6660ee0a-merged.mount: Deactivated successfully.
Jan 23 18:36:11 compute-0 podman[93585]: 2026-01-23 18:36:11.422488021 +0000 UTC m=+1.710084944 container remove d1b5faa39a321c6a101f28e648b41dc695b81addef93255aab109b27e193dc59 (image=quay.io/ceph/ceph:v20, name=trusting_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:36:11 compute-0 systemd[1]: libpod-conmon-d1b5faa39a321c6a101f28e648b41dc695b81addef93255aab109b27e193dc59.scope: Deactivated successfully.
Jan 23 18:36:11 compute-0 sudo[93757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:11 compute-0 sudo[93757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:11 compute-0 sudo[93757]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:11 compute-0 sudo[93582]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:11 compute-0 sudo[93792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:36:11 compute-0 sudo[93792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:11 compute-0 podman[93829]: 2026-01-23 18:36:11.808350812 +0000 UTC m=+0.058755081 container create acc8c4670e5972a7f6e850a020ec7e3a5ac53a800a5453aec89de089d8421dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_haslett, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:36:11 compute-0 systemd[1]: Started libpod-conmon-acc8c4670e5972a7f6e850a020ec7e3a5ac53a800a5453aec89de089d8421dbe.scope.
Jan 23 18:36:11 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:11 compute-0 podman[93829]: 2026-01-23 18:36:11.768809119 +0000 UTC m=+0.019213368 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:11 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:11 compute-0 podman[93829]: 2026-01-23 18:36:11.897523205 +0000 UTC m=+0.147927494 container init acc8c4670e5972a7f6e850a020ec7e3a5ac53a800a5453aec89de089d8421dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_haslett, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:11 compute-0 podman[93829]: 2026-01-23 18:36:11.908428713 +0000 UTC m=+0.158832982 container start acc8c4670e5972a7f6e850a020ec7e3a5ac53a800a5453aec89de089d8421dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:11 compute-0 podman[93829]: 2026-01-23 18:36:11.913519257 +0000 UTC m=+0.163923526 container attach acc8c4670e5972a7f6e850a020ec7e3a5ac53a800a5453aec89de089d8421dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_haslett, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 18:36:11 compute-0 dreamy_haslett[93845]: 167 167
Jan 23 18:36:11 compute-0 systemd[1]: libpod-acc8c4670e5972a7f6e850a020ec7e3a5ac53a800a5453aec89de089d8421dbe.scope: Deactivated successfully.
Jan 23 18:36:11 compute-0 conmon[93845]: conmon acc8c4670e5972a7f6e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acc8c4670e5972a7f6e850a020ec7e3a5ac53a800a5453aec89de089d8421dbe.scope/container/memory.events
Jan 23 18:36:11 compute-0 podman[93829]: 2026-01-23 18:36:11.917334238 +0000 UTC m=+0.167738517 container died acc8c4670e5972a7f6e850a020ec7e3a5ac53a800a5453aec89de089d8421dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-199d7fd4f9e4c9ee419af0396932cbecaff6c1624bbb9035e03d02d7099b4dcf-merged.mount: Deactivated successfully.
Jan 23 18:36:11 compute-0 podman[93829]: 2026-01-23 18:36:11.95989949 +0000 UTC m=+0.210303729 container remove acc8c4670e5972a7f6e850a020ec7e3a5ac53a800a5453aec89de089d8421dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_haslett, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:36:11 compute-0 systemd[1]: libpod-conmon-acc8c4670e5972a7f6e850a020ec7e3a5ac53a800a5453aec89de089d8421dbe.scope: Deactivated successfully.
Jan 23 18:36:12 compute-0 sudo[93897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbqghoscmhkgefrvjeqswdbgvfbatgxs ; /usr/bin/python3'
Jan 23 18:36:12 compute-0 sudo[93897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:12 compute-0 podman[93876]: 2026-01-23 18:36:12.134904328 +0000 UTC m=+0.048811528 container create ec8973e68bf41ffd1d9220883983f8a32540446fc53f6d0b42551b6bfc605c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 18:36:12 compute-0 systemd[1]: Started libpod-conmon-ec8973e68bf41ffd1d9220883983f8a32540446fc53f6d0b42551b6bfc605c34.scope.
Jan 23 18:36:12 compute-0 podman[93876]: 2026-01-23 18:36:12.108971324 +0000 UTC m=+0.022878514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdb5e5063c133b771aa2a7b405eeb5ca0e97580dad13c59f1afe1bdd7c722f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdb5e5063c133b771aa2a7b405eeb5ca0e97580dad13c59f1afe1bdd7c722f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdb5e5063c133b771aa2a7b405eeb5ca0e97580dad13c59f1afe1bdd7c722f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdb5e5063c133b771aa2a7b405eeb5ca0e97580dad13c59f1afe1bdd7c722f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdb5e5063c133b771aa2a7b405eeb5ca0e97580dad13c59f1afe1bdd7c722f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:12 compute-0 python3[93906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:12 compute-0 podman[93876]: 2026-01-23 18:36:12.556658387 +0000 UTC m=+0.470565587 container init ec8973e68bf41ffd1d9220883983f8a32540446fc53f6d0b42551b6bfc605c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:12 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2928320394' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 23 18:36:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:12 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2928320394' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 23 18:36:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:36:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:36:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:36:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:12 compute-0 ceph-mon[75097]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:12 compute-0 podman[93876]: 2026-01-23 18:36:12.568743145 +0000 UTC m=+0.482650315 container start ec8973e68bf41ffd1d9220883983f8a32540446fc53f6d0b42551b6bfc605c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:12 compute-0 podman[93876]: 2026-01-23 18:36:12.573308816 +0000 UTC m=+0.487216016 container attach ec8973e68bf41ffd1d9220883983f8a32540446fc53f6d0b42551b6bfc605c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:12 compute-0 podman[93913]: 2026-01-23 18:36:12.630008772 +0000 UTC m=+0.357342129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:12 compute-0 podman[93913]: 2026-01-23 18:36:12.969818148 +0000 UTC m=+0.697151505 container create 8eb2b2864c495aa39225334b7deddaa6e540638296b56928b2aa124e2d8a5f15 (image=quay.io/ceph/ceph:v20, name=pensive_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 18:36:13 compute-0 systemd[1]: Started libpod-conmon-8eb2b2864c495aa39225334b7deddaa6e540638296b56928b2aa124e2d8a5f15.scope.
Jan 23 18:36:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171a005c8c179552815759daa38f930f5aa959d2f52f170fc3809abd2a0d2974/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171a005c8c179552815759daa38f930f5aa959d2f52f170fc3809abd2a0d2974/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:13 compute-0 sad_wozniak[93909]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:36:13 compute-0 sad_wozniak[93909]: --> All data devices are unavailable
Jan 23 18:36:13 compute-0 systemd[1]: libpod-ec8973e68bf41ffd1d9220883983f8a32540446fc53f6d0b42551b6bfc605c34.scope: Deactivated successfully.
Jan 23 18:36:13 compute-0 podman[93913]: 2026-01-23 18:36:13.283186956 +0000 UTC m=+1.010520344 container init 8eb2b2864c495aa39225334b7deddaa6e540638296b56928b2aa124e2d8a5f15 (image=quay.io/ceph/ceph:v20, name=pensive_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 23 18:36:13 compute-0 podman[93913]: 2026-01-23 18:36:13.289548614 +0000 UTC m=+1.016881931 container start 8eb2b2864c495aa39225334b7deddaa6e540638296b56928b2aa124e2d8a5f15 (image=quay.io/ceph/ceph:v20, name=pensive_tesla, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 18:36:13 compute-0 podman[93913]: 2026-01-23 18:36:13.296105217 +0000 UTC m=+1.023438624 container attach 8eb2b2864c495aa39225334b7deddaa6e540638296b56928b2aa124e2d8a5f15 (image=quay.io/ceph/ceph:v20, name=pensive_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:36:13 compute-0 podman[93876]: 2026-01-23 18:36:13.306370307 +0000 UTC m=+1.220277497 container died ec8973e68bf41ffd1d9220883983f8a32540446fc53f6d0b42551b6bfc605c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbdb5e5063c133b771aa2a7b405eeb5ca0e97580dad13c59f1afe1bdd7c722f0-merged.mount: Deactivated successfully.
Jan 23 18:36:13 compute-0 podman[93876]: 2026-01-23 18:36:13.445062147 +0000 UTC m=+1.358969317 container remove ec8973e68bf41ffd1d9220883983f8a32540446fc53f6d0b42551b6bfc605c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 18:36:13 compute-0 sudo[93792]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:13 compute-0 sudo[93981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:13 compute-0 sudo[93981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:13 compute-0 sudo[93981]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:13 compute-0 systemd[1]: libpod-conmon-ec8973e68bf41ffd1d9220883983f8a32540446fc53f6d0b42551b6bfc605c34.scope: Deactivated successfully.
Jan 23 18:36:13 compute-0 sudo[94006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:36:13 compute-0 sudo[94006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 23 18:36:13 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261723638' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 23 18:36:13 compute-0 pensive_tesla[93943]: 
Jan 23 18:36:13 compute-0 pensive_tesla[93943]: {"fsid":"4da2adac-d096-5ead-9ec5-3108259ba9e4","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1769193338,"num_in_osds":3,"osd_in_since":1769193312,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83849216,"bytes_avail":64328077312,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-23T18:36:03:927191+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-23T18:35:30.861225+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 23 18:36:13 compute-0 systemd[1]: libpod-8eb2b2864c495aa39225334b7deddaa6e540638296b56928b2aa124e2d8a5f15.scope: Deactivated successfully.
Jan 23 18:36:13 compute-0 podman[93913]: 2026-01-23 18:36:13.837372348 +0000 UTC m=+1.564705695 container died 8eb2b2864c495aa39225334b7deddaa6e540638296b56928b2aa124e2d8a5f15 (image=quay.io/ceph/ceph:v20, name=pensive_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:36:13 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:13 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2261723638' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 23 18:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-171a005c8c179552815759daa38f930f5aa959d2f52f170fc3809abd2a0d2974-merged.mount: Deactivated successfully.
Jan 23 18:36:13 compute-0 podman[93913]: 2026-01-23 18:36:13.927365513 +0000 UTC m=+1.654698830 container remove 8eb2b2864c495aa39225334b7deddaa6e540638296b56928b2aa124e2d8a5f15 (image=quay.io/ceph/ceph:v20, name=pensive_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 18:36:13 compute-0 podman[94046]: 2026-01-23 18:36:13.937690695 +0000 UTC m=+0.086737309 container create 53a0d2bb2ee911c22fe72ab1078b3deeab5c3ef9e48f4db9904e82634ad78dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_boyd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 23 18:36:13 compute-0 systemd[1]: libpod-conmon-8eb2b2864c495aa39225334b7deddaa6e540638296b56928b2aa124e2d8a5f15.scope: Deactivated successfully.
Jan 23 18:36:13 compute-0 sudo[93897]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:13 compute-0 systemd[1]: Started libpod-conmon-53a0d2bb2ee911c22fe72ab1078b3deeab5c3ef9e48f4db9904e82634ad78dae.scope.
Jan 23 18:36:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:14 compute-0 podman[94046]: 2026-01-23 18:36:13.907582631 +0000 UTC m=+0.056629275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:14 compute-0 podman[94046]: 2026-01-23 18:36:14.03298181 +0000 UTC m=+0.182028434 container init 53a0d2bb2ee911c22fe72ab1078b3deeab5c3ef9e48f4db9904e82634ad78dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_boyd, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:36:14 compute-0 podman[94046]: 2026-01-23 18:36:14.042277225 +0000 UTC m=+0.191323839 container start 53a0d2bb2ee911c22fe72ab1078b3deeab5c3ef9e48f4db9904e82634ad78dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 18:36:14 compute-0 intelligent_boyd[94073]: 167 167
Jan 23 18:36:14 compute-0 systemd[1]: libpod-53a0d2bb2ee911c22fe72ab1078b3deeab5c3ef9e48f4db9904e82634ad78dae.scope: Deactivated successfully.
Jan 23 18:36:14 compute-0 podman[94046]: 2026-01-23 18:36:14.046752923 +0000 UTC m=+0.195799597 container attach 53a0d2bb2ee911c22fe72ab1078b3deeab5c3ef9e48f4db9904e82634ad78dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_boyd, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:14 compute-0 podman[94046]: 2026-01-23 18:36:14.048083958 +0000 UTC m=+0.197130572 container died 53a0d2bb2ee911c22fe72ab1078b3deeab5c3ef9e48f4db9904e82634ad78dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1a617e4aed0b161d5b0dab3c307e190a4764b8d13e60efd2b27251ba0e12c69-merged.mount: Deactivated successfully.
Jan 23 18:36:14 compute-0 sudo[94111]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thdkiywihahedmrlyqjxnxxctmhhdoci ; /usr/bin/python3'
Jan 23 18:36:14 compute-0 sudo[94111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:14 compute-0 podman[94046]: 2026-01-23 18:36:14.133915833 +0000 UTC m=+0.282962447 container remove 53a0d2bb2ee911c22fe72ab1078b3deeab5c3ef9e48f4db9904e82634ad78dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_boyd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 18:36:14 compute-0 systemd[1]: libpod-conmon-53a0d2bb2ee911c22fe72ab1078b3deeab5c3ef9e48f4db9904e82634ad78dae.scope: Deactivated successfully.
Jan 23 18:36:14 compute-0 python3[94115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:14 compute-0 podman[94126]: 2026-01-23 18:36:14.282645937 +0000 UTC m=+0.027328012 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:14 compute-0 podman[94128]: 2026-01-23 18:36:14.28808853 +0000 UTC m=+0.022773731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:14 compute-0 podman[94126]: 2026-01-23 18:36:14.618417766 +0000 UTC m=+0.363099821 container create da5b86427f0488e74011750244db54017189171e1cc5b96796931ae5380f9263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:36:14 compute-0 podman[94128]: 2026-01-23 18:36:14.659643225 +0000 UTC m=+0.394328406 container create c287e02a6d0fde996456ff625ed5af78653fe32dac76f93a25701da56fcf5fd9 (image=quay.io/ceph/ceph:v20, name=adoring_wing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:14 compute-0 systemd[1]: Started libpod-conmon-da5b86427f0488e74011750244db54017189171e1cc5b96796931ae5380f9263.scope.
Jan 23 18:36:14 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55852103c867ea4cf0953268ef92796c3274d5dcbd5d3ed3f4684791e389159b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55852103c867ea4cf0953268ef92796c3274d5dcbd5d3ed3f4684791e389159b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55852103c867ea4cf0953268ef92796c3274d5dcbd5d3ed3f4684791e389159b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55852103c867ea4cf0953268ef92796c3274d5dcbd5d3ed3f4684791e389159b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:14 compute-0 systemd[1]: Started libpod-conmon-c287e02a6d0fde996456ff625ed5af78653fe32dac76f93a25701da56fcf5fd9.scope.
Jan 23 18:36:14 compute-0 podman[94126]: 2026-01-23 18:36:14.810001701 +0000 UTC m=+0.554683786 container init da5b86427f0488e74011750244db54017189171e1cc5b96796931ae5380f9263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_robinson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:14 compute-0 podman[94126]: 2026-01-23 18:36:14.816847642 +0000 UTC m=+0.561529697 container start da5b86427f0488e74011750244db54017189171e1cc5b96796931ae5380f9263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:14 compute-0 podman[94126]: 2026-01-23 18:36:14.823498328 +0000 UTC m=+0.568180413 container attach da5b86427f0488e74011750244db54017189171e1cc5b96796931ae5380f9263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_robinson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:14 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853f5423b9264d057de69e24242650578df40474d1ea9f159f1715fb17adf94c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853f5423b9264d057de69e24242650578df40474d1ea9f159f1715fb17adf94c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:14 compute-0 podman[94128]: 2026-01-23 18:36:14.848790025 +0000 UTC m=+0.583475226 container init c287e02a6d0fde996456ff625ed5af78653fe32dac76f93a25701da56fcf5fd9 (image=quay.io/ceph/ceph:v20, name=adoring_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 18:36:14 compute-0 podman[94128]: 2026-01-23 18:36:14.854554017 +0000 UTC m=+0.589239198 container start c287e02a6d0fde996456ff625ed5af78653fe32dac76f93a25701da56fcf5fd9 (image=quay.io/ceph/ceph:v20, name=adoring_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 18:36:14 compute-0 podman[94128]: 2026-01-23 18:36:14.952223694 +0000 UTC m=+0.686908875 container attach c287e02a6d0fde996456ff625ed5af78653fe32dac76f93a25701da56fcf5fd9 (image=quay.io/ceph/ceph:v20, name=adoring_wing, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 23 18:36:15 compute-0 angry_robinson[94155]: {
Jan 23 18:36:15 compute-0 angry_robinson[94155]:     "0": [
Jan 23 18:36:15 compute-0 angry_robinson[94155]:         {
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "devices": [
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "/dev/loop3"
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             ],
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_name": "ceph_lv0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_size": "21470642176",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "name": "ceph_lv0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "tags": {
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.crush_device_class": "",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.encrypted": "0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.osd_id": "0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.type": "block",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.vdo": "0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.with_tpm": "0"
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             },
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "type": "block",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "vg_name": "ceph_vg0"
Jan 23 18:36:15 compute-0 angry_robinson[94155]:         }
Jan 23 18:36:15 compute-0 angry_robinson[94155]:     ],
Jan 23 18:36:15 compute-0 angry_robinson[94155]:     "1": [
Jan 23 18:36:15 compute-0 angry_robinson[94155]:         {
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "devices": [
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "/dev/loop4"
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             ],
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_name": "ceph_lv1",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_size": "21470642176",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "name": "ceph_lv1",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "tags": {
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.crush_device_class": "",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.encrypted": "0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.osd_id": "1",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.type": "block",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.vdo": "0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.with_tpm": "0"
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             },
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "type": "block",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "vg_name": "ceph_vg1"
Jan 23 18:36:15 compute-0 angry_robinson[94155]:         }
Jan 23 18:36:15 compute-0 angry_robinson[94155]:     ],
Jan 23 18:36:15 compute-0 angry_robinson[94155]:     "2": [
Jan 23 18:36:15 compute-0 angry_robinson[94155]:         {
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "devices": [
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "/dev/loop5"
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             ],
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_name": "ceph_lv2",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_size": "21470642176",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "name": "ceph_lv2",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "tags": {
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.crush_device_class": "",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.encrypted": "0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.osd_id": "2",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.type": "block",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.vdo": "0",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:                 "ceph.with_tpm": "0"
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             },
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "type": "block",
Jan 23 18:36:15 compute-0 angry_robinson[94155]:             "vg_name": "ceph_vg2"
Jan 23 18:36:15 compute-0 angry_robinson[94155]:         }
Jan 23 18:36:15 compute-0 angry_robinson[94155]:     ]
Jan 23 18:36:15 compute-0 angry_robinson[94155]: }
Jan 23 18:36:15 compute-0 systemd[1]: libpod-da5b86427f0488e74011750244db54017189171e1cc5b96796931ae5380f9263.scope: Deactivated successfully.
Jan 23 18:36:15 compute-0 podman[94126]: 2026-01-23 18:36:15.127961061 +0000 UTC m=+0.872643116 container died da5b86427f0488e74011750244db54017189171e1cc5b96796931ae5380f9263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:15 compute-0 ceph-mon[75097]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-55852103c867ea4cf0953268ef92796c3274d5dcbd5d3ed3f4684791e389159b-merged.mount: Deactivated successfully.
Jan 23 18:36:15 compute-0 podman[94126]: 2026-01-23 18:36:15.233463655 +0000 UTC m=+0.978145710 container remove da5b86427f0488e74011750244db54017189171e1cc5b96796931ae5380f9263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_robinson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 18:36:15 compute-0 systemd[1]: libpod-conmon-da5b86427f0488e74011750244db54017189171e1cc5b96796931ae5380f9263.scope: Deactivated successfully.
Jan 23 18:36:15 compute-0 sudo[94006]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:15 compute-0 sudo[94202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:15 compute-0 sudo[94202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:15 compute-0 sudo[94202]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:15 compute-0 sudo[94227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:36:15 compute-0 sudo[94227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 18:36:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618200869' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 18:36:15 compute-0 adoring_wing[94160]: 
Jan 23 18:36:15 compute-0 adoring_wing[94160]: {"epoch":1,"fsid":"4da2adac-d096-5ead-9ec5-3108259ba9e4","modified":"2026-01-23T18:34:01.802781Z","created":"2026-01-23T18:34:01.802781Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 23 18:36:15 compute-0 adoring_wing[94160]: dumped monmap epoch 1
Jan 23 18:36:15 compute-0 systemd[1]: libpod-c287e02a6d0fde996456ff625ed5af78653fe32dac76f93a25701da56fcf5fd9.scope: Deactivated successfully.
Jan 23 18:36:15 compute-0 podman[94128]: 2026-01-23 18:36:15.615914686 +0000 UTC m=+1.350599877 container died c287e02a6d0fde996456ff625ed5af78653fe32dac76f93a25701da56fcf5fd9 (image=quay.io/ceph/ceph:v20, name=adoring_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 23 18:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-853f5423b9264d057de69e24242650578df40474d1ea9f159f1715fb17adf94c-merged.mount: Deactivated successfully.
Jan 23 18:36:15 compute-0 podman[94128]: 2026-01-23 18:36:15.755504439 +0000 UTC m=+1.490189620 container remove c287e02a6d0fde996456ff625ed5af78653fe32dac76f93a25701da56fcf5fd9 (image=quay.io/ceph/ceph:v20, name=adoring_wing, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:15 compute-0 systemd[1]: libpod-conmon-c287e02a6d0fde996456ff625ed5af78653fe32dac76f93a25701da56fcf5fd9.scope: Deactivated successfully.
Jan 23 18:36:15 compute-0 podman[94266]: 2026-01-23 18:36:15.763309515 +0000 UTC m=+0.145196962 container create 8b6d4c16e113a00fe82da31fe355e41b0b7f3092fa6efa19ed7aef2d6ec54208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_wiles, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:15 compute-0 sudo[94111]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:15 compute-0 podman[94266]: 2026-01-23 18:36:15.740652547 +0000 UTC m=+0.122540004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:15 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:15 compute-0 systemd[1]: Started libpod-conmon-8b6d4c16e113a00fe82da31fe355e41b0b7f3092fa6efa19ed7aef2d6ec54208.scope.
Jan 23 18:36:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:15 compute-0 podman[94266]: 2026-01-23 18:36:15.919850945 +0000 UTC m=+0.301738412 container init 8b6d4c16e113a00fe82da31fe355e41b0b7f3092fa6efa19ed7aef2d6ec54208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_wiles, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 18:36:15 compute-0 podman[94266]: 2026-01-23 18:36:15.926945533 +0000 UTC m=+0.308832980 container start 8b6d4c16e113a00fe82da31fe355e41b0b7f3092fa6efa19ed7aef2d6ec54208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_wiles, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 18:36:15 compute-0 podman[94266]: 2026-01-23 18:36:15.931105832 +0000 UTC m=+0.312993269 container attach 8b6d4c16e113a00fe82da31fe355e41b0b7f3092fa6efa19ed7aef2d6ec54208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_wiles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 23 18:36:15 compute-0 awesome_wiles[94294]: 167 167
Jan 23 18:36:15 compute-0 systemd[1]: libpod-8b6d4c16e113a00fe82da31fe355e41b0b7f3092fa6efa19ed7aef2d6ec54208.scope: Deactivated successfully.
Jan 23 18:36:15 compute-0 podman[94266]: 2026-01-23 18:36:15.933083244 +0000 UTC m=+0.314970691 container died 8b6d4c16e113a00fe82da31fe355e41b0b7f3092fa6efa19ed7aef2d6ec54208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_wiles, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 18:36:16 compute-0 sudo[94335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmzzfmbznxnbxqwxxdscbwskgfoxrhlw ; /usr/bin/python3'
Jan 23 18:36:16 compute-0 sudo[94335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:16 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2618200869' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 18:36:16 compute-0 ceph-mon[75097]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-251aac2b8bd9a020377dd41144a7cebeed3171f6d465128f6c929f7f6af54c37-merged.mount: Deactivated successfully.
Jan 23 18:36:16 compute-0 python3[94337]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:16 compute-0 podman[94266]: 2026-01-23 18:36:16.341584833 +0000 UTC m=+0.723472270 container remove 8b6d4c16e113a00fe82da31fe355e41b0b7f3092fa6efa19ed7aef2d6ec54208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 23 18:36:16 compute-0 podman[94338]: 2026-01-23 18:36:16.367232849 +0000 UTC m=+0.035842046 container create 29f86af44d6a54d0c927bf26296064eedeacb8d7f7e5664ba1acda95159c11bd (image=quay.io/ceph/ceph:v20, name=tender_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:16 compute-0 systemd[1]: Started libpod-conmon-29f86af44d6a54d0c927bf26296064eedeacb8d7f7e5664ba1acda95159c11bd.scope.
Jan 23 18:36:16 compute-0 systemd[1]: libpod-conmon-8b6d4c16e113a00fe82da31fe355e41b0b7f3092fa6efa19ed7aef2d6ec54208.scope: Deactivated successfully.
Jan 23 18:36:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c895b46fbe6d9140127930606e7285d439f39065c5a60a06fb0ea577bdb2f7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c895b46fbe6d9140127930606e7285d439f39065c5a60a06fb0ea577bdb2f7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:16 compute-0 podman[94338]: 2026-01-23 18:36:16.350485377 +0000 UTC m=+0.019094594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:16 compute-0 podman[94338]: 2026-01-23 18:36:16.458835657 +0000 UTC m=+0.127444884 container init 29f86af44d6a54d0c927bf26296064eedeacb8d7f7e5664ba1acda95159c11bd (image=quay.io/ceph/ceph:v20, name=tender_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:16 compute-0 podman[94338]: 2026-01-23 18:36:16.464682271 +0000 UTC m=+0.133291478 container start 29f86af44d6a54d0c927bf26296064eedeacb8d7f7e5664ba1acda95159c11bd (image=quay.io/ceph/ceph:v20, name=tender_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 18:36:16 compute-0 podman[94338]: 2026-01-23 18:36:16.468841981 +0000 UTC m=+0.137451178 container attach 29f86af44d6a54d0c927bf26296064eedeacb8d7f7e5664ba1acda95159c11bd (image=quay.io/ceph/ceph:v20, name=tender_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:16 compute-0 podman[94364]: 2026-01-23 18:36:16.497719672 +0000 UTC m=+0.033887895 container create de1efddd41b9d2587f3045e5232cc4668864487a484aa5c84153efd2a31464ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:16 compute-0 systemd[1]: Started libpod-conmon-de1efddd41b9d2587f3045e5232cc4668864487a484aa5c84153efd2a31464ff.scope.
Jan 23 18:36:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3edc06b3690c8b5d9d5e393930a4cf8884f083c824aeaa7f96d36b562c331e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:16 compute-0 podman[94364]: 2026-01-23 18:36:16.482640435 +0000 UTC m=+0.018808678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3edc06b3690c8b5d9d5e393930a4cf8884f083c824aeaa7f96d36b562c331e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3edc06b3690c8b5d9d5e393930a4cf8884f083c824aeaa7f96d36b562c331e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3edc06b3690c8b5d9d5e393930a4cf8884f083c824aeaa7f96d36b562c331e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:16 compute-0 podman[94364]: 2026-01-23 18:36:16.594197198 +0000 UTC m=+0.130365441 container init de1efddd41b9d2587f3045e5232cc4668864487a484aa5c84153efd2a31464ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:16 compute-0 podman[94364]: 2026-01-23 18:36:16.612417268 +0000 UTC m=+0.148585481 container start de1efddd41b9d2587f3045e5232cc4668864487a484aa5c84153efd2a31464ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cannon, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:16 compute-0 podman[94364]: 2026-01-23 18:36:16.616598699 +0000 UTC m=+0.152766932 container attach de1efddd41b9d2587f3045e5232cc4668864487a484aa5c84153efd2a31464ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cannon, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 18:36:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 23 18:36:16 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2328743671' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 23 18:36:16 compute-0 tender_tharp[94356]: [client.openstack]
Jan 23 18:36:17 compute-0 tender_tharp[94356]:         key = AQDrvnNpAAAAABAAuDzh3RYW8bPoDkiVx+RkoQ==
Jan 23 18:36:17 compute-0 tender_tharp[94356]:         caps mgr = "allow *"
Jan 23 18:36:17 compute-0 tender_tharp[94356]:         caps mon = "profile rbd"
Jan 23 18:36:17 compute-0 tender_tharp[94356]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 23 18:36:17 compute-0 systemd[1]: libpod-29f86af44d6a54d0c927bf26296064eedeacb8d7f7e5664ba1acda95159c11bd.scope: Deactivated successfully.
Jan 23 18:36:17 compute-0 podman[94437]: 2026-01-23 18:36:17.075398445 +0000 UTC m=+0.035336053 container died 29f86af44d6a54d0c927bf26296064eedeacb8d7f7e5664ba1acda95159c11bd (image=quay.io/ceph/ceph:v20, name=tender_tharp, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 18:36:17 compute-0 lvm[94489]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:36:17 compute-0 lvm[94489]: VG ceph_vg0 finished
Jan 23 18:36:17 compute-0 lvm[94492]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:36:17 compute-0 lvm[94492]: VG ceph_vg1 finished
Jan 23 18:36:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c895b46fbe6d9140127930606e7285d439f39065c5a60a06fb0ea577bdb2f7d-merged.mount: Deactivated successfully.
Jan 23 18:36:17 compute-0 lvm[94498]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:36:17 compute-0 lvm[94498]: VG ceph_vg2 finished
Jan 23 18:36:17 compute-0 lvm[94499]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:36:17 compute-0 lvm[94499]: VG ceph_vg0 finished
Jan 23 18:36:17 compute-0 lvm[94500]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:36:17 compute-0 lvm[94500]: VG ceph_vg1 finished
Jan 23 18:36:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2328743671' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 23 18:36:17 compute-0 podman[94437]: 2026-01-23 18:36:17.365712565 +0000 UTC m=+0.325650173 container remove 29f86af44d6a54d0c927bf26296064eedeacb8d7f7e5664ba1acda95159c11bd (image=quay.io/ceph/ceph:v20, name=tender_tharp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:17 compute-0 systemd[1]: libpod-conmon-29f86af44d6a54d0c927bf26296064eedeacb8d7f7e5664ba1acda95159c11bd.scope: Deactivated successfully.
Jan 23 18:36:17 compute-0 sudo[94335]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:17 compute-0 tender_cannon[94382]: {}
Jan 23 18:36:17 compute-0 systemd[1]: libpod-de1efddd41b9d2587f3045e5232cc4668864487a484aa5c84153efd2a31464ff.scope: Deactivated successfully.
Jan 23 18:36:17 compute-0 systemd[1]: libpod-de1efddd41b9d2587f3045e5232cc4668864487a484aa5c84153efd2a31464ff.scope: Consumed 1.323s CPU time.
Jan 23 18:36:17 compute-0 podman[94364]: 2026-01-23 18:36:17.434311605 +0000 UTC m=+0.970479828 container died de1efddd41b9d2587f3045e5232cc4668864487a484aa5c84153efd2a31464ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:36:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:17 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:18 compute-0 ceph-mon[75097]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3edc06b3690c8b5d9d5e393930a4cf8884f083c824aeaa7f96d36b562c331e4-merged.mount: Deactivated successfully.
Jan 23 18:36:18 compute-0 podman[94364]: 2026-01-23 18:36:18.486672402 +0000 UTC m=+2.022840635 container remove de1efddd41b9d2587f3045e5232cc4668864487a484aa5c84153efd2a31464ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 23 18:36:18 compute-0 systemd[1]: libpod-conmon-de1efddd41b9d2587f3045e5232cc4668864487a484aa5c84153efd2a31464ff.scope: Deactivated successfully.
Jan 23 18:36:18 compute-0 sudo[94227]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:36:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:36:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:18 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev 42066935-e1db-44f9-aa1b-0f1ae6a26399 (Updating rgw.rgw deployment (+1 -> 1))
Jan 23 18:36:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kehgrd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 23 18:36:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kehgrd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 23 18:36:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kehgrd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 18:36:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 23 18:36:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:36:18 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:18 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.kehgrd on compute-0
Jan 23 18:36:18 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.kehgrd on compute-0
Jan 23 18:36:18 compute-0 sudo[94638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:18 compute-0 sudo[94685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vycvsuykzdjiboerwsgcrhxhnsksbqht ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769193378.347255-36736-34421120745572/async_wrapper.py j840006577354 30 /home/zuul/.ansible/tmp/ansible-tmp-1769193378.347255-36736-34421120745572/AnsiballZ_command.py _'
Jan 23 18:36:18 compute-0 sudo[94638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:18 compute-0 sudo[94685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:18 compute-0 sudo[94638]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:18 compute-0 sudo[94690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:36:18 compute-0 sudo[94690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:18 compute-0 ansible-async_wrapper.py[94688]: Invoked with j840006577354 30 /home/zuul/.ansible/tmp/ansible-tmp-1769193378.347255-36736-34421120745572/AnsiballZ_command.py _
Jan 23 18:36:18 compute-0 ansible-async_wrapper.py[94717]: Starting module and watcher
Jan 23 18:36:18 compute-0 ansible-async_wrapper.py[94717]: Start watching 94718 (30)
Jan 23 18:36:18 compute-0 ansible-async_wrapper.py[94718]: Start module (94718)
Jan 23 18:36:18 compute-0 ansible-async_wrapper.py[94688]: Return async_wrapper task started.
Jan 23 18:36:18 compute-0 sudo[94685]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:19 compute-0 python3[94719]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:19 compute-0 podman[94732]: 2026-01-23 18:36:19.051123475 +0000 UTC m=+0.030793473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:19 compute-0 podman[94732]: 2026-01-23 18:36:19.264719961 +0000 UTC m=+0.244389929 container create f3e503f2fb45edf0edec484544b48fc815b9d2a2018878a475493290245d4b0d (image=quay.io/ceph/ceph:v20, name=clever_goldstine, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 18:36:19 compute-0 systemd[1]: Started libpod-conmon-f3e503f2fb45edf0edec484544b48fc815b9d2a2018878a475493290245d4b0d.scope.
Jan 23 18:36:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d2c072b074956863d6411639b7a28a6ba08a50e1d3488f9ad4323a612091eb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d2c072b074956863d6411639b7a28a6ba08a50e1d3488f9ad4323a612091eb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:19 compute-0 podman[94732]: 2026-01-23 18:36:19.369065874 +0000 UTC m=+0.348735882 container init f3e503f2fb45edf0edec484544b48fc815b9d2a2018878a475493290245d4b0d (image=quay.io/ceph/ceph:v20, name=clever_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:19 compute-0 podman[94732]: 2026-01-23 18:36:19.378611376 +0000 UTC m=+0.358281354 container start f3e503f2fb45edf0edec484544b48fc815b9d2a2018878a475493290245d4b0d (image=quay.io/ceph/ceph:v20, name=clever_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:36:19 compute-0 podman[94732]: 2026-01-23 18:36:19.383942957 +0000 UTC m=+0.363612945 container attach f3e503f2fb45edf0edec484544b48fc815b9d2a2018878a475493290245d4b0d (image=quay.io/ceph/ceph:v20, name=clever_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 18:36:19 compute-0 podman[94780]: 2026-01-23 18:36:19.504612331 +0000 UTC m=+0.096671282 container create 091d3266e0268bd955aeb67f74c47020f07ee3e0cd402d6b55ea60b462900a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:19 compute-0 podman[94780]: 2026-01-23 18:36:19.429755756 +0000 UTC m=+0.021815107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:19 compute-0 systemd[1]: Started libpod-conmon-091d3266e0268bd955aeb67f74c47020f07ee3e0cd402d6b55ea60b462900a6a.scope.
Jan 23 18:36:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:19 compute-0 podman[94780]: 2026-01-23 18:36:19.722049318 +0000 UTC m=+0.314108289 container init 091d3266e0268bd955aeb67f74c47020f07ee3e0cd402d6b55ea60b462900a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_yonath, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 18:36:19 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:19 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:19 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kehgrd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 23 18:36:19 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kehgrd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 18:36:19 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:19 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:19 compute-0 ceph-mon[75097]: Deploying daemon rgw.rgw.compute-0.kehgrd on compute-0
Jan 23 18:36:19 compute-0 podman[94780]: 2026-01-23 18:36:19.734233369 +0000 UTC m=+0.326292330 container start 091d3266e0268bd955aeb67f74c47020f07ee3e0cd402d6b55ea60b462900a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_yonath, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 18:36:19 compute-0 quirky_yonath[94815]: 167 167
Jan 23 18:36:19 compute-0 systemd[1]: libpod-091d3266e0268bd955aeb67f74c47020f07ee3e0cd402d6b55ea60b462900a6a.scope: Deactivated successfully.
Jan 23 18:36:19 compute-0 podman[94780]: 2026-01-23 18:36:19.738489762 +0000 UTC m=+0.330548743 container attach 091d3266e0268bd955aeb67f74c47020f07ee3e0cd402d6b55ea60b462900a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:19 compute-0 conmon[94815]: conmon 091d3266e0268bd955ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-091d3266e0268bd955aeb67f74c47020f07ee3e0cd402d6b55ea60b462900a6a.scope/container/memory.events
Jan 23 18:36:19 compute-0 podman[94780]: 2026-01-23 18:36:19.741244424 +0000 UTC m=+0.333303375 container died 091d3266e0268bd955aeb67f74c47020f07ee3e0cd402d6b55ea60b462900a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_yonath, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 18:36:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:36:19 compute-0 clever_goldstine[94765]: 
Jan 23 18:36:19 compute-0 clever_goldstine[94765]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 18:36:19 compute-0 systemd[1]: libpod-f3e503f2fb45edf0edec484544b48fc815b9d2a2018878a475493290245d4b0d.scope: Deactivated successfully.
Jan 23 18:36:19 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4efc3ef661ebda5c431fa628f6562fb6d96a02e341e151693dc9cf30d9bb9a2-merged.mount: Deactivated successfully.
Jan 23 18:36:20 compute-0 sudo[94892]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjmgehawfpxdwulwnriiqttmwnwadcqe ; /usr/bin/python3'
Jan 23 18:36:20 compute-0 sudo[94892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:20 compute-0 podman[94780]: 2026-01-23 18:36:20.053416041 +0000 UTC m=+0.645475002 container remove 091d3266e0268bd955aeb67f74c47020f07ee3e0cd402d6b55ea60b462900a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_yonath, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:20 compute-0 systemd[1]: libpod-conmon-091d3266e0268bd955aeb67f74c47020f07ee3e0cd402d6b55ea60b462900a6a.scope: Deactivated successfully.
Jan 23 18:36:20 compute-0 podman[94732]: 2026-01-23 18:36:20.149811655 +0000 UTC m=+1.129481633 container died f3e503f2fb45edf0edec484544b48fc815b9d2a2018878a475493290245d4b0d (image=quay.io/ceph/ceph:v20, name=clever_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 18:36:20 compute-0 python3[94894]: ansible-ansible.legacy.async_status Invoked with jid=j840006577354.94688 mode=status _async_dir=/root/.ansible_async
Jan 23 18:36:20 compute-0 sudo[94892]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-73d2c072b074956863d6411639b7a28a6ba08a50e1d3488f9ad4323a612091eb-merged.mount: Deactivated successfully.
Jan 23 18:36:20 compute-0 systemd[1]: Reloading.
Jan 23 18:36:20 compute-0 podman[94834]: 2026-01-23 18:36:20.641140679 +0000 UTC m=+0.808464213 container remove f3e503f2fb45edf0edec484544b48fc815b9d2a2018878a475493290245d4b0d (image=quay.io/ceph/ceph:v20, name=clever_goldstine, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:36:20 compute-0 ansible-async_wrapper.py[94718]: Module complete (94718)
Jan 23 18:36:20 compute-0 systemd-rc-local-generator[94921]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:36:20 compute-0 systemd-sysv-generator[94924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:36:20 compute-0 ceph-mon[75097]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:36:20 compute-0 ceph-mon[75097]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:20 compute-0 systemd[1]: libpod-conmon-f3e503f2fb45edf0edec484544b48fc815b9d2a2018878a475493290245d4b0d.scope: Deactivated successfully.
Jan 23 18:36:20 compute-0 systemd[1]: Reloading.
Jan 23 18:36:21 compute-0 systemd-sysv-generator[94972]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:36:21 compute-0 systemd-rc-local-generator[94968]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:36:21 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.kehgrd for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:36:21 compute-0 sudo[95024]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmonsfflpxlbulvznxexduswimkwtdnp ; /usr/bin/python3'
Jan 23 18:36:21 compute-0 sudo[95024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:21 compute-0 python3[95028]: ansible-ansible.legacy.async_status Invoked with jid=j840006577354.94688 mode=status _async_dir=/root/.ansible_async
Jan 23 18:36:21 compute-0 sudo[95024]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:21 compute-0 sudo[95132]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhdfyvnqxhuytsknsnyecnlkyxqckxbd ; /usr/bin/python3'
Jan 23 18:36:21 compute-0 sudo[95132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:21 compute-0 podman[95073]: 2026-01-23 18:36:21.534156932 +0000 UTC m=+0.021710944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:21 compute-0 python3[95134]: ansible-ansible.legacy.async_status Invoked with jid=j840006577354.94688 mode=cleanup _async_dir=/root/.ansible_async
Jan 23 18:36:21 compute-0 sudo[95132]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:21 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:21 compute-0 podman[95073]: 2026-01-23 18:36:21.903827504 +0000 UTC m=+0.391381486 container create 8a05cee4200cb35bf9dd526c3e06d1d439cb18a273444a2be046bbab5068efbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-rgw-rgw-compute-0-kehgrd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a304d01fded9670402bac35ede5354c0f4b09490144464ecf667b2e70c3c521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a304d01fded9670402bac35ede5354c0f4b09490144464ecf667b2e70c3c521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a304d01fded9670402bac35ede5354c0f4b09490144464ecf667b2e70c3c521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a304d01fded9670402bac35ede5354c0f4b09490144464ecf667b2e70c3c521/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.kehgrd supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:22 compute-0 podman[95073]: 2026-01-23 18:36:22.15438679 +0000 UTC m=+0.641940792 container init 8a05cee4200cb35bf9dd526c3e06d1d439cb18a273444a2be046bbab5068efbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-rgw-rgw-compute-0-kehgrd, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:22 compute-0 podman[95073]: 2026-01-23 18:36:22.161479375 +0000 UTC m=+0.649033347 container start 8a05cee4200cb35bf9dd526c3e06d1d439cb18a273444a2be046bbab5068efbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-rgw-rgw-compute-0-kehgrd, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:22 compute-0 sudo[95166]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwhnjxmwicyazofvnqlauidczunrucfn ; /usr/bin/python3'
Jan 23 18:36:22 compute-0 sudo[95166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:22 compute-0 radosgw[95149]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 23 18:36:22 compute-0 radosgw[95149]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 23 18:36:22 compute-0 radosgw[95149]: framework: beast
Jan 23 18:36:22 compute-0 radosgw[95149]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 23 18:36:22 compute-0 radosgw[95149]: init_numa not setting numa affinity
Jan 23 18:36:22 compute-0 bash[95073]: 8a05cee4200cb35bf9dd526c3e06d1d439cb18a273444a2be046bbab5068efbe
Jan 23 18:36:22 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.kehgrd for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:36:22 compute-0 python3[95176]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:22 compute-0 sudo[94690]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:36:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:36:22 compute-0 podman[95197]: 2026-01-23 18:36:22.442670595 +0000 UTC m=+0.081633003 container create 171d56a12372c5f290c2eab7181cecece911697ee0d615853ce97dcae89da72c (image=quay.io/ceph/ceph:v20, name=bold_wing, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 23 18:36:22 compute-0 podman[95197]: 2026-01-23 18:36:22.392476238 +0000 UTC m=+0.031438676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:22 compute-0 systemd[1]: Started libpod-conmon-171d56a12372c5f290c2eab7181cecece911697ee0d615853ce97dcae89da72c.scope.
Jan 23 18:36:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:22 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev 42066935-e1db-44f9-aa1b-0f1ae6a26399 (Updating rgw.rgw deployment (+1 -> 1))
Jan 23 18:36:22 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event 42066935-e1db-44f9-aa1b-0f1ae6a26399 (Updating rgw.rgw deployment (+1 -> 1)) in 4 seconds
Jan 23 18:36:22 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 23 18:36:22 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 23 18:36:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 23 18:36:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 23 18:36:22 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb14540bb0cb7c4f0c8d88749559054e02f8c6393642fdb7f02cfc80b637ec55/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb14540bb0cb7c4f0c8d88749559054e02f8c6393642fdb7f02cfc80b637ec55/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:22 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev a673b0cc-0879-47a8-a8bc-9eb06083da07 (Updating mds.cephfs deployment (+1 -> 1))
Jan 23 18:36:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kgapua", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 23 18:36:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kgapua", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 23 18:36:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kgapua", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 18:36:22 compute-0 podman[95197]: 2026-01-23 18:36:22.598461345 +0000 UTC m=+0.237423833 container init 171d56a12372c5f290c2eab7181cecece911697ee0d615853ce97dcae89da72c (image=quay.io/ceph/ceph:v20, name=bold_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 18:36:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:36:22 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:22 compute-0 ceph-mgr[75390]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.kgapua on compute-0
Jan 23 18:36:22 compute-0 ceph-mgr[75390]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.kgapua on compute-0
Jan 23 18:36:22 compute-0 podman[95197]: 2026-01-23 18:36:22.611169619 +0000 UTC m=+0.250132067 container start 171d56a12372c5f290c2eab7181cecece911697ee0d615853ce97dcae89da72c (image=quay.io/ceph/ceph:v20, name=bold_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:22 compute-0 podman[95197]: 2026-01-23 18:36:22.633676113 +0000 UTC m=+0.272638541 container attach 171d56a12372c5f290c2eab7181cecece911697ee0d615853ce97dcae89da72c (image=quay.io/ceph/ceph:v20, name=bold_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:22 compute-0 sudo[95217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:22 compute-0 sudo[95217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:22 compute-0 sudo[95217]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:22 compute-0 sudo[95242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4
Jan 23 18:36:22 compute-0 sudo[95242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:23 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14253 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:36:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 23 18:36:23 compute-0 bold_wing[95213]: 
Jan 23 18:36:23 compute-0 bold_wing[95213]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 18:36:23 compute-0 ceph-mon[75097]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kgapua", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 23 18:36:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kgapua", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 18:36:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:23 compute-0 systemd[1]: libpod-171d56a12372c5f290c2eab7181cecece911697ee0d615853ce97dcae89da72c.scope: Deactivated successfully.
Jan 23 18:36:23 compute-0 podman[95197]: 2026-01-23 18:36:23.097776792 +0000 UTC m=+0.736739240 container died 171d56a12372c5f290c2eab7181cecece911697ee0d615853ce97dcae89da72c (image=quay.io/ceph/ceph:v20, name=bold_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 23 18:36:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 23 18:36:23 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 23 18:36:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 23 18:36:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1526984234' entity='client.rgw.rgw.compute-0.kehgrd' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 23 18:36:23 compute-0 podman[95324]: 2026-01-23 18:36:23.096977163 +0000 UTC m=+0.034037141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb14540bb0cb7c4f0c8d88749559054e02f8c6393642fdb7f02cfc80b637ec55-merged.mount: Deactivated successfully.
Jan 23 18:36:23 compute-0 podman[95197]: 2026-01-23 18:36:23.323878845 +0000 UTC m=+0.962841253 container remove 171d56a12372c5f290c2eab7181cecece911697ee0d615853ce97dcae89da72c (image=quay.io/ceph/ceph:v20, name=bold_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 18:36:23 compute-0 sudo[95166]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:23 compute-0 podman[95324]: 2026-01-23 18:36:23.448944518 +0000 UTC m=+0.386004406 container create f5eab50cf81fcd0978b5a603956d061fe7bb36b67cb4a465de8027698717c4cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_newton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 18:36:23 compute-0 systemd[1]: libpod-conmon-171d56a12372c5f290c2eab7181cecece911697ee0d615853ce97dcae89da72c.scope: Deactivated successfully.
Jan 23 18:36:23 compute-0 systemd[1]: Started libpod-conmon-f5eab50cf81fcd0978b5a603956d061fe7bb36b67cb4a465de8027698717c4cc.scope.
Jan 23 18:36:23 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:23 compute-0 podman[95324]: 2026-01-23 18:36:23.788331983 +0000 UTC m=+0.725391901 container init f5eab50cf81fcd0978b5a603956d061fe7bb36b67cb4a465de8027698717c4cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 23 18:36:23 compute-0 podman[95324]: 2026-01-23 18:36:23.793515881 +0000 UTC m=+0.730575769 container start f5eab50cf81fcd0978b5a603956d061fe7bb36b67cb4a465de8027698717c4cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_newton, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 18:36:23 compute-0 hungry_newton[95357]: 167 167
Jan 23 18:36:23 compute-0 systemd[1]: libpod-f5eab50cf81fcd0978b5a603956d061fe7bb36b67cb4a465de8027698717c4cc.scope: Deactivated successfully.
Jan 23 18:36:23 compute-0 ansible-async_wrapper.py[94717]: Done in kid B.
Jan 23 18:36:23 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v80: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:23 compute-0 podman[95324]: 2026-01-23 18:36:23.972660686 +0000 UTC m=+0.909720604 container attach f5eab50cf81fcd0978b5a603956d061fe7bb36b67cb4a465de8027698717c4cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 18:36:23 compute-0 podman[95324]: 2026-01-23 18:36:23.973283072 +0000 UTC m=+0.910342960 container died f5eab50cf81fcd0978b5a603956d061fe7bb36b67cb4a465de8027698717c4cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_newton, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:24 compute-0 sudo[95397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvpizfjoneltjevfsqbfnhkpvawoogbc ; /usr/bin/python3'
Jan 23 18:36:24 compute-0 sudo[95397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc46dd6e2f252f2a7505aad0b4fd98720cec23267553d1cb2bc28286b1131652-merged.mount: Deactivated successfully.
Jan 23 18:36:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 23 18:36:24 compute-0 python3[95399]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1526984234' entity='client.rgw.rgw.compute-0.kehgrd' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 23 18:36:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 23 18:36:24 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 23 18:36:24 compute-0 podman[95324]: 2026-01-23 18:36:24.866610031 +0000 UTC m=+1.803669919 container remove f5eab50cf81fcd0978b5a603956d061fe7bb36b67cb4a465de8027698717c4cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_newton, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:36:24 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:24 compute-0 ceph-mon[75097]: Saving service rgw.rgw spec with placement compute-0
Jan 23 18:36:24 compute-0 ceph-mon[75097]: Deploying daemon mds.cephfs.compute-0.kgapua on compute-0
Jan 23 18:36:24 compute-0 ceph-mon[75097]: from='client.14253 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:36:24 compute-0 ceph-mon[75097]: osdmap e32: 3 total, 3 up, 3 in
Jan 23 18:36:24 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1526984234' entity='client.rgw.rgw.compute-0.kehgrd' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 23 18:36:24 compute-0 podman[95401]: 2026-01-23 18:36:24.910547423 +0000 UTC m=+0.685494457 container create 66684557a37d48796f9ab467ff8d0f8b641fd9c68d7e6b2682994c8b865db11d (image=quay.io/ceph/ceph:v20, name=sad_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 18:36:24 compute-0 systemd[1]: Started libpod-conmon-66684557a37d48796f9ab467ff8d0f8b641fd9c68d7e6b2682994c8b865db11d.scope.
Jan 23 18:36:24 compute-0 podman[95401]: 2026-01-23 18:36:24.878384301 +0000 UTC m=+0.653331425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:24 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:24 compute-0 systemd[1]: libpod-conmon-f5eab50cf81fcd0978b5a603956d061fe7bb36b67cb4a465de8027698717c4cc.scope: Deactivated successfully.
Jan 23 18:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c5a008c791caf5b85829e4b5194ff5989254f84db44800c9ac6223f611fa49d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c5a008c791caf5b85829e4b5194ff5989254f84db44800c9ac6223f611fa49d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:25 compute-0 podman[95401]: 2026-01-23 18:36:25.025359583 +0000 UTC m=+0.800306657 container init 66684557a37d48796f9ab467ff8d0f8b641fd9c68d7e6b2682994c8b865db11d (image=quay.io/ceph/ceph:v20, name=sad_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:25 compute-0 podman[95401]: 2026-01-23 18:36:25.032632082 +0000 UTC m=+0.807579126 container start 66684557a37d48796f9ab467ff8d0f8b641fd9c68d7e6b2682994c8b865db11d (image=quay.io/ceph/ceph:v20, name=sad_kapitsa, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:25 compute-0 systemd[1]: Reloading.
Jan 23 18:36:25 compute-0 podman[95401]: 2026-01-23 18:36:25.037255677 +0000 UTC m=+0.812202731 container attach 66684557a37d48796f9ab467ff8d0f8b641fd9c68d7e6b2682994c8b865db11d (image=quay.io/ceph/ceph:v20, name=sad_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 18:36:25 compute-0 ceph-mgr[75390]: [progress INFO root] Writing back 4 completed events
Jan 23 18:36:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 18:36:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:25 compute-0 ceph-mgr[75390]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 23 18:36:25 compute-0 systemd-rc-local-generator[96012]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:36:25 compute-0 systemd-sysv-generator[96015]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:36:25 compute-0 systemd[1]: Reloading.
Jan 23 18:36:25 compute-0 systemd-sysv-generator[96071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:36:25 compute-0 systemd-rc-local-generator[96067]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:36:25 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:36:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} v 0)
Jan 23 18:36:25 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 18:36:25 compute-0 sad_kapitsa[95933]: 
Jan 23 18:36:25 compute-0 sad_kapitsa[95933]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 23 18:36:25 compute-0 podman[95401]: 2026-01-23 18:36:25.528965627 +0000 UTC m=+1.303912681 container died 66684557a37d48796f9ab467ff8d0f8b641fd9c68d7e6b2682994c8b865db11d (image=quay.io/ceph/ceph:v20, name=sad_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:25 compute-0 systemd[1]: libpod-66684557a37d48796f9ab467ff8d0f8b641fd9c68d7e6b2682994c8b865db11d.scope: Deactivated successfully.
Jan 23 18:36:25 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.kgapua for 4da2adac-d096-5ead-9ec5-3108259ba9e4...
Jan 23 18:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c5a008c791caf5b85829e4b5194ff5989254f84db44800c9ac6223f611fa49d-merged.mount: Deactivated successfully.
Jan 23 18:36:25 compute-0 podman[95401]: 2026-01-23 18:36:25.694552538 +0000 UTC m=+1.469499572 container remove 66684557a37d48796f9ab467ff8d0f8b641fd9c68d7e6b2682994c8b865db11d (image=quay.io/ceph/ceph:v20, name=sad_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:36:25 compute-0 systemd[1]: libpod-conmon-66684557a37d48796f9ab467ff8d0f8b641fd9c68d7e6b2682994c8b865db11d.scope: Deactivated successfully.
Jan 23 18:36:25 compute-0 sudo[95397]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 23 18:36:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 23 18:36:25 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 23 18:36:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 23 18:36:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 23 18:36:25 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v83: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:25 compute-0 podman[96143]: 2026-01-23 18:36:25.878663165 +0000 UTC m=+0.049146542 container create 61db3caa0d9bd83b9b87bd90f2fe7224b9ef92c4087adee491660fdd4e3f33b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mds-cephfs-compute-0-kgapua, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:25 compute-0 ceph-mon[75097]: pgmap v80: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:25 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1526984234' entity='client.rgw.rgw.compute-0.kehgrd' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 23 18:36:25 compute-0 ceph-mon[75097]: osdmap e33: 3 total, 3 up, 3 in
Jan 23 18:36:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 18:36:25 compute-0 ceph-mon[75097]: osdmap e34: 3 total, 3 up, 3 in
Jan 23 18:36:25 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 23 18:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baf163ec864e086efee4849e2f4066a6e605911897a22d716773ed45ee1edd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baf163ec864e086efee4849e2f4066a6e605911897a22d716773ed45ee1edd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baf163ec864e086efee4849e2f4066a6e605911897a22d716773ed45ee1edd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baf163ec864e086efee4849e2f4066a6e605911897a22d716773ed45ee1edd7/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.kgapua supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:25 compute-0 podman[96143]: 2026-01-23 18:36:25.851287181 +0000 UTC m=+0.021770558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:25 compute-0 podman[96143]: 2026-01-23 18:36:25.997178847 +0000 UTC m=+0.167662204 container init 61db3caa0d9bd83b9b87bd90f2fe7224b9ef92c4087adee491660fdd4e3f33b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mds-cephfs-compute-0-kgapua, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:36:26 compute-0 podman[96143]: 2026-01-23 18:36:26.002081698 +0000 UTC m=+0.172565035 container start 61db3caa0d9bd83b9b87bd90f2fe7224b9ef92c4087adee491660fdd4e3f33b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mds-cephfs-compute-0-kgapua, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 18:36:26 compute-0 bash[96143]: 61db3caa0d9bd83b9b87bd90f2fe7224b9ef92c4087adee491660fdd4e3f33b1
Jan 23 18:36:26 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.kgapua for 4da2adac-d096-5ead-9ec5-3108259ba9e4.
Jan 23 18:36:26 compute-0 ceph-mds[96162]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 18:36:26 compute-0 ceph-mds[96162]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 23 18:36:26 compute-0 ceph-mds[96162]: main not setting numa affinity
Jan 23 18:36:26 compute-0 ceph-mds[96162]: pidfile_write: ignore empty --pid-file
Jan 23 18:36:26 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mds-cephfs-compute-0-kgapua[96158]: starting mds.cephfs.compute-0.kgapua at 
Jan 23 18:36:26 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua Updating MDS map to version 2 from mon.0
Jan 23 18:36:26 compute-0 sudo[95242]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:36:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:36:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 23 18:36:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:26 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev a673b0cc-0879-47a8-a8bc-9eb06083da07 (Updating mds.cephfs deployment (+1 -> 1))
Jan 23 18:36:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 23 18:36:26 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event a673b0cc-0879-47a8-a8bc-9eb06083da07 (Updating mds.cephfs deployment (+1 -> 1)) in 4 seconds
Jan 23 18:36:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 23 18:36:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:26 compute-0 sudo[96181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:36:26 compute-0 sudo[96181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:26 compute-0 sudo[96181]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:26 compute-0 sudo[96206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:26 compute-0 sudo[96206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:26 compute-0 sudo[96206]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:26 compute-0 sudo[96231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:36:26 compute-0 sudo[96231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:26 compute-0 sudo[96279]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmeywvdnyxpjntrvfhgyghzrbqdpltbz ; /usr/bin/python3'
Jan 23 18:36:26 compute-0 sudo[96279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:26 compute-0 python3[96281]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:26 compute-0 podman[96317]: 2026-01-23 18:36:26.760012659 +0000 UTC m=+0.042817957 container create 75a8e48e1c7cad5b2c58648a6aa902b520d5c46cc272fe165647e2cf93fcc942 (image=quay.io/ceph/ceph:v20, name=tender_babbage, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:26 compute-0 systemd[1]: Started libpod-conmon-75a8e48e1c7cad5b2c58648a6aa902b520d5c46cc272fe165647e2cf93fcc942.scope.
Jan 23 18:36:26 compute-0 podman[96331]: 2026-01-23 18:36:26.822964481 +0000 UTC m=+0.074373784 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 18:36:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 23 18:36:26 compute-0 podman[96317]: 2026-01-23 18:36:26.742234161 +0000 UTC m=+0.025039489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:26 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3de5f149aeac0d14e09def726bb02384ce378bd302566d0090cd8c4c2ab40ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3de5f149aeac0d14e09def726bb02384ce378bd302566d0090cd8c4c2ab40ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e3 new map
Jan 23 18:36:27 compute-0 ceph-mon[75097]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:36:27 compute-0 ceph-mon[75097]: pgmap v83: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-23T18:36:27:140295+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T18:36:03.926949+0000
                                           modified        2026-01-23T18:36:03.926949+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kgapua{-1:14260} state up:standby seq 1 addr [v2:192.168.122.100:6814/2685709666,v1:192.168.122.100:6815/2685709666] compat {c=[1],r=[1],i=[1fff]}]
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua Updating MDS map to version 3 from mon.0
Jan 23 18:36:27 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua Monitors have assigned me to become a standby
Jan 23 18:36:27 compute-0 podman[96331]: 2026-01-23 18:36:27.172101006 +0000 UTC m=+0.423510299 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2685709666,v1:192.168.122.100:6815/2685709666] up:boot
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2685709666,v1:192.168.122.100:6815/2685709666] as mds.0
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.kgapua assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.kgapua"} v 0)
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.kgapua"} : dispatch
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e3 all = 0
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e4 new map
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-23T18:36:27:175237+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T18:36:03.926949+0000
                                           modified        2026-01-23T18:36:27.175231+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14260}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.kgapua{0:14260} state up:creating seq 1 addr [v2:192.168.122.100:6814/2685709666,v1:192.168.122.100:6815/2685709666] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua Updating MDS map to version 4 from mon.0
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x1
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x100
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x600
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x601
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x602
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x603
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x604
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x605
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x606
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x607
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x608
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.cache creating system inode with ino:0x609
Jan 23 18:36:27 compute-0 podman[96317]: 2026-01-23 18:36:27.19622261 +0000 UTC m=+0.479028018 container init 75a8e48e1c7cad5b2c58648a6aa902b520d5c46cc272fe165647e2cf93fcc942 (image=quay.io/ceph/ceph:v20, name=tender_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.kgapua=up:creating}
Jan 23 18:36:27 compute-0 podman[96317]: 2026-01-23 18:36:27.208233107 +0000 UTC m=+0.491038405 container start 75a8e48e1c7cad5b2c58648a6aa902b520d5c46cc272fe165647e2cf93fcc942 (image=quay.io/ceph/ceph:v20, name=tender_babbage, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:36:27 compute-0 podman[96317]: 2026-01-23 18:36:27.240730428 +0000 UTC m=+0.523535766 container attach 75a8e48e1c7cad5b2c58648a6aa902b520d5c46cc272fe165647e2cf93fcc942 (image=quay.io/ceph/ceph:v20, name=tender_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:27 compute-0 ceph-mds[96162]: mds.0.4 creating_done
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.kgapua is now active in filesystem cephfs as rank 0
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:36:27 compute-0 tender_babbage[96347]: 
Jan 23 18:36:27 compute-0 tender_babbage[96347]: [{"container_id": "90dd94238ca7", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.22%", "created": "2026-01-23T18:34:51.547610Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-23T18:34:51.616324Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T18:36:11.328543Z", "memory_usage": 7799308, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-23T18:34:50.810466Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4@crash.compute-0", "version": "20.2.0"}, {"daemon_id": "cephfs.compute-0.kgapua", "daemon_name": "mds.cephfs.compute-0.kgapua", "daemon_type": "mds", "events": ["2026-01-23T18:36:26.184143Z daemon:mds.cephfs.compute-0.kgapua [INFO] \"Deployed mds.cephfs.compute-0.kgapua on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "3aebf20d6487", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.81%", "created": 
"2026-01-23T18:34:09.421593Z", "daemon_id": "compute-0.rszfwe", "daemon_name": "mgr.compute-0.rszfwe", "daemon_type": "mgr", "events": ["2026-01-23T18:34:56.134319Z daemon:mgr.compute-0.rszfwe [INFO] \"Reconfigured mgr.compute-0.rszfwe on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T18:36:11.328473Z", "memory_usage": 546308096, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-23T18:34:09.313326Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4@mgr.compute-0.rszfwe", "version": "20.2.0"}, {"container_id": "3f8d5ffe26bf", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.32%", "created": "2026-01-23T18:34:04.100466Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-23T18:34:55.552217Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T18:36:11.328373Z", "memory_request": 2147483648, "memory_usage": 38828769, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-23T18:34:06.606208Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4@mon.compute-0", "version": "20.2.0"}, {"container_id": "41ddd2197559", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": 
"524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.88%", "created": "2026-01-23T18:35:20.492309Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-23T18:35:20.572082Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T18:36:11.328634Z", "memory_request": 4294967296, "memory_usage": 56811847, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-23T18:35:20.365471Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4@osd.0", "version": "20.2.0"}, {"container_id": "aff29668a1c7", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.10%", "created": "2026-01-23T18:35:25.628438Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-23T18:35:25.784421Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T18:36:11.328701Z", "memory_request": 4294967296, "memory_usage": 57912852, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-23T18:35:25.452416Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4@osd.1", "version": "20.2.0"}, {"container_id": "23cb766d14d0", "container_image_digests": 
["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.11%", "created": "2026-01-23T18:35:30.548564Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-23T18:35:30.686792Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T18:36:11.328767Z", "memory_request": 4294967296, "memory_usage": 57105448, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-23T18:35:30.337942Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4@osd.2", "version": "20.2.0"}, {"daemon_id": "rgw.compute-0.kehgrd", "daemon_name": "rgw.rgw.compute-0.kehgrd", "daemon_type": "rgw", "events": ["2026-01-23T18:36:22.451193Z daemon:rgw.rgw.compute-0.kehgrd [INFO] \"Deployed rgw.rgw.compute-0.kehgrd on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Jan 23 18:36:27 compute-0 systemd[1]: libpod-75a8e48e1c7cad5b2c58648a6aa902b520d5c46cc272fe165647e2cf93fcc942.scope: Deactivated successfully.
Jan 23 18:36:27 compute-0 podman[96317]: 2026-01-23 18:36:27.694944194 +0000 UTC m=+0.977749492 container died 75a8e48e1c7cad5b2c58648a6aa902b520d5c46cc272fe165647e2cf93fcc942 (image=quay.io/ceph/ceph:v20, name=tender_babbage, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3de5f149aeac0d14e09def726bb02384ce378bd302566d0090cd8c4c2ab40ac-merged.mount: Deactivated successfully.
Jan 23 18:36:27 compute-0 podman[96317]: 2026-01-23 18:36:27.755865435 +0000 UTC m=+1.038670743 container remove 75a8e48e1c7cad5b2c58648a6aa902b520d5c46cc272fe165647e2cf93fcc942 (image=quay.io/ceph/ceph:v20, name=tender_babbage, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:27 compute-0 systemd[1]: libpod-conmon-75a8e48e1c7cad5b2c58648a6aa902b520d5c46cc272fe165647e2cf93fcc942.scope: Deactivated successfully.
Jan 23 18:36:27 compute-0 sudo[96279]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:27 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v85: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:27 compute-0 sudo[96231]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:36:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:28 compute-0 sudo[96569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:28 compute-0 sudo[96569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:28 compute-0 sudo[96569]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:28 compute-0 sudo[96594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:36:28 compute-0 sudo[96594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 23 18:36:28 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e5 new map
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-23T18:36:28:196476+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T18:36:03.926949+0000
                                           modified        2026-01-23T18:36:28.196474+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14260}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14260 members: 14260
                                           [mds.cephfs.compute-0.kgapua{0:14260} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2685709666,v1:192.168.122.100:6815/2685709666] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 23 18:36:28 compute-0 ceph-mon[75097]: osdmap e35: 3 total, 3 up, 3 in
Jan 23 18:36:28 compute-0 ceph-mon[75097]: mds.? [v2:192.168.122.100:6814/2685709666,v1:192.168.122.100:6815/2685709666] up:boot
Jan 23 18:36:28 compute-0 ceph-mon[75097]: daemon mds.cephfs.compute-0.kgapua assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 23 18:36:28 compute-0 ceph-mon[75097]: Cluster is now healthy
Jan 23 18:36:28 compute-0 ceph-mon[75097]: fsmap cephfs:0 1 up:standby
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.kgapua"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua Updating MDS map to version 5 from mon.0
Jan 23 18:36:28 compute-0 ceph-mon[75097]: fsmap cephfs:1 {0=cephfs.compute-0.kgapua=up:creating}
Jan 23 18:36:28 compute-0 ceph-mds[96162]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 23 18:36:28 compute-0 ceph-mon[75097]: daemon mds.cephfs.compute-0.kgapua is now active in filesystem cephfs as rank 0
Jan 23 18:36:28 compute-0 ceph-mds[96162]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 23 18:36:28 compute-0 ceph-mds[96162]: mds.0.4 recovery_done -- successful recovery!
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 18:36:28 compute-0 ceph-mds[96162]: mds.0.4 active_start
Jan 23 18:36:28 compute-0 ceph-mon[75097]: pgmap v85: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2685709666,v1:192.168.122.100:6815/2685709666] up:active
Jan 23 18:36:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.kgapua=up:active}
Jan 23 18:36:28 compute-0 podman[96636]: 2026-01-23 18:36:28.4551634 +0000 UTC m=+0.040392980 container create 8c8ca124562a5fb8c8e7ba2a9d81bc1441bfe5aac009abb526a179aa2ed54651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:28 compute-0 systemd[1]: Started libpod-conmon-8c8ca124562a5fb8c8e7ba2a9d81bc1441bfe5aac009abb526a179aa2ed54651.scope.
Jan 23 18:36:28 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:28 compute-0 podman[96636]: 2026-01-23 18:36:28.437205866 +0000 UTC m=+0.022435466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:28 compute-0 podman[96636]: 2026-01-23 18:36:28.625466415 +0000 UTC m=+0.210696045 container init 8c8ca124562a5fb8c8e7ba2a9d81bc1441bfe5aac009abb526a179aa2ed54651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cerf, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:28 compute-0 podman[96636]: 2026-01-23 18:36:28.634276264 +0000 UTC m=+0.219505864 container start 8c8ca124562a5fb8c8e7ba2a9d81bc1441bfe5aac009abb526a179aa2ed54651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 23 18:36:28 compute-0 elastic_cerf[96652]: 167 167
Jan 23 18:36:28 compute-0 systemd[1]: libpod-8c8ca124562a5fb8c8e7ba2a9d81bc1441bfe5aac009abb526a179aa2ed54651.scope: Deactivated successfully.
Jan 23 18:36:28 compute-0 podman[96636]: 2026-01-23 18:36:28.685064521 +0000 UTC m=+0.270294151 container attach 8c8ca124562a5fb8c8e7ba2a9d81bc1441bfe5aac009abb526a179aa2ed54651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:28 compute-0 podman[96636]: 2026-01-23 18:36:28.685987173 +0000 UTC m=+0.271216773 container died 8c8ca124562a5fb8c8e7ba2a9d81bc1441bfe5aac009abb526a179aa2ed54651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cerf, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-53670cb3e5f981bb63db068e064f02046e8b7511caa778b03cbcf7a1829363f9-merged.mount: Deactivated successfully.
Jan 23 18:36:28 compute-0 sudo[96689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqnhkahrcuazgbqaxoqdamnhtavusezx ; /usr/bin/python3'
Jan 23 18:36:28 compute-0 sudo[96689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:28 compute-0 podman[96636]: 2026-01-23 18:36:28.736720779 +0000 UTC m=+0.321950369 container remove 8c8ca124562a5fb8c8e7ba2a9d81bc1441bfe5aac009abb526a179aa2ed54651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cerf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:36:28 compute-0 systemd[1]: libpod-conmon-8c8ca124562a5fb8c8e7ba2a9d81bc1441bfe5aac009abb526a179aa2ed54651.scope: Deactivated successfully.
Jan 23 18:36:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:36:28
Jan 23 18:36:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:36:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Some PGs (0.200000) are unknown; try again later
Jan 23 18:36:28 compute-0 python3[96692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:28 compute-0 podman[96701]: 2026-01-23 18:36:28.929845489 +0000 UTC m=+0.067500852 container create af2073f3cdfe06adb9b9d56f98e3f7ce607cd68a49686a1a9aeba4eee0dce3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tharp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:28 compute-0 podman[96714]: 2026-01-23 18:36:28.958544309 +0000 UTC m=+0.050305506 container create 5b463b9653eeaf0f7720389ec88e321a2978293b03e45a173677fa402a7a6c1d (image=quay.io/ceph/ceph:v20, name=tender_raman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:28 compute-0 systemd[1]: Started libpod-conmon-af2073f3cdfe06adb9b9d56f98e3f7ce607cd68a49686a1a9aeba4eee0dce3d4.scope.
Jan 23 18:36:28 compute-0 podman[96701]: 2026-01-23 18:36:28.885535782 +0000 UTC m=+0.023191175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0a59262665afa8ad794b79e8e2b8b2ed10ab3a4f441745784c673c59f57dfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0a59262665afa8ad794b79e8e2b8b2ed10ab3a4f441745784c673c59f57dfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0a59262665afa8ad794b79e8e2b8b2ed10ab3a4f441745784c673c59f57dfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0a59262665afa8ad794b79e8e2b8b2ed10ab3a4f441745784c673c59f57dfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0a59262665afa8ad794b79e8e2b8b2ed10ab3a4f441745784c673c59f57dfb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:29 compute-0 podman[96714]: 2026-01-23 18:36:28.937541599 +0000 UTC m=+0.029302816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:29 compute-0 podman[96701]: 2026-01-23 18:36:29.054596187 +0000 UTC m=+0.192251590 container init af2073f3cdfe06adb9b9d56f98e3f7ce607cd68a49686a1a9aeba4eee0dce3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:29 compute-0 podman[96701]: 2026-01-23 18:36:29.064265956 +0000 UTC m=+0.201921329 container start af2073f3cdfe06adb9b9d56f98e3f7ce607cd68a49686a1a9aeba4eee0dce3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:29 compute-0 systemd[1]: Started libpod-conmon-5b463b9653eeaf0f7720389ec88e321a2978293b03e45a173677fa402a7a6c1d.scope.
Jan 23 18:36:29 compute-0 podman[96701]: 2026-01-23 18:36:29.082708563 +0000 UTC m=+0.220363986 container attach af2073f3cdfe06adb9b9d56f98e3f7ce607cd68a49686a1a9aeba4eee0dce3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tharp, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 23 18:36:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9584c0895a1a2eec13d7e123e3aec1f063fee833e2505ea134d49fc4fab0ae42/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9584c0895a1a2eec13d7e123e3aec1f063fee833e2505ea134d49fc4fab0ae42/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 23 18:36:29 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 18:36:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 23 18:36:29 compute-0 podman[96714]: 2026-01-23 18:36:29.283206705 +0000 UTC m=+0.374967902 container init 5b463b9653eeaf0f7720389ec88e321a2978293b03e45a173677fa402a7a6c1d (image=quay.io/ceph/ceph:v20, name=tender_raman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 18:36:29 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 23 18:36:29 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:29 compute-0 ceph-mon[75097]: osdmap e36: 3 total, 3 up, 3 in
Jan 23 18:36:29 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 23 18:36:29 compute-0 ceph-mon[75097]: mds.? [v2:192.168.122.100:6814/2685709666,v1:192.168.122.100:6815/2685709666] up:active
Jan 23 18:36:29 compute-0 ceph-mon[75097]: fsmap cephfs:1 {0=cephfs.compute-0.kgapua=up:active}
Jan 23 18:36:29 compute-0 podman[96714]: 2026-01-23 18:36:29.29791758 +0000 UTC m=+0.389678777 container start 5b463b9653eeaf0f7720389ec88e321a2978293b03e45a173677fa402a7a6c1d (image=quay.io/ceph/ceph:v20, name=tender_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 18:36:29 compute-0 podman[96714]: 2026-01-23 18:36:29.303967959 +0000 UTC m=+0.395729156 container attach 5b463b9653eeaf0f7720389ec88e321a2978293b03e45a173677fa402a7a6c1d (image=quay.io/ceph/ceph:v20, name=tender_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:29 compute-0 magical_tharp[96729]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:36:29 compute-0 magical_tharp[96729]: --> All data devices are unavailable
Jan 23 18:36:29 compute-0 systemd[1]: libpod-af2073f3cdfe06adb9b9d56f98e3f7ce607cd68a49686a1a9aeba4eee0dce3d4.scope: Deactivated successfully.
Jan 23 18:36:29 compute-0 podman[96777]: 2026-01-23 18:36:29.617268704 +0000 UTC m=+0.028099807 container died af2073f3cdfe06adb9b9d56f98e3f7ce607cd68a49686a1a9aeba4eee0dce3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 23 18:36:29 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770202832' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 23 18:36:29 compute-0 tender_raman[96736]: 
Jan 23 18:36:29 compute-0 tender_raman[96736]: {"fsid":"4da2adac-d096-5ead-9ec5-3108259ba9e4","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":143,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":37,"num_osds":3,"num_up_osds":3,"osd_up_since":1769193338,"num_in_osds":3,"osd_in_since":1769193312,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7},{"state_name":"creating+peering","count":1},{"state_name":"unknown","count":1}],"num_pgs":9,"num_pools":9,"num_objects":2,"data_bytes":459280,"bytes_used":83873792,"bytes_avail":64328052736,"bytes_total":64411926528,"unknown_pgs_ratio":0.1111111119389534,"inactive_pgs_ratio":0.1111111119389534},"fsmap":{"epoch":5,"btime":"2026-01-23T18:36:28:196476+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.kgapua","status":"up:active","gid":14260}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-23T18:35:30.861225+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"26d3ee6f-7ac2-4dbf-9f71-3ebe5367809e":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 23 18:36:29 compute-0 systemd[1]: libpod-5b463b9653eeaf0f7720389ec88e321a2978293b03e45a173677fa402a7a6c1d.scope: Deactivated successfully.
Jan 23 18:36:29 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v88: 10 pgs: 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.73508810222308e-07 of space, bias 4.0, pg target 0.0008082105722667696 quantized to 16 (current 1)
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:36:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 23 18:36:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [progress INFO root] Writing back 5 completed events
Jan 23 18:36:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 18:36:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:36:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 23 18:36:30 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 18:36:30 compute-0 ceph-mon[75097]: osdmap e37: 3 total, 3 up, 3 in
Jan 23 18:36:30 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1770202832' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 23 18:36:30 compute-0 ceph-mon[75097]: pgmap v88: 10 pgs: 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 23 18:36:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b0a59262665afa8ad794b79e8e2b8b2ed10ab3a4f441745784c673c59f57dfb-merged.mount: Deactivated successfully.
Jan 23 18:36:30 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 23 18:36:30 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 23 18:36:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 23 18:36:30 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev 446d360e-4368-4161-9d5f-94cf4b827bf5 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 23 18:36:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 23 18:36:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:30 compute-0 podman[96777]: 2026-01-23 18:36:30.534933156 +0000 UTC m=+0.945764229 container remove af2073f3cdfe06adb9b9d56f98e3f7ce607cd68a49686a1a9aeba4eee0dce3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:36:30 compute-0 systemd[1]: libpod-conmon-af2073f3cdfe06adb9b9d56f98e3f7ce607cd68a49686a1a9aeba4eee0dce3d4.scope: Deactivated successfully.
Jan 23 18:36:30 compute-0 podman[96714]: 2026-01-23 18:36:30.577326315 +0000 UTC m=+1.669087532 container died 5b463b9653eeaf0f7720389ec88e321a2978293b03e45a173677fa402a7a6c1d (image=quay.io/ceph/ceph:v20, name=tender_raman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 23 18:36:30 compute-0 sudo[96594]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:30 compute-0 sudo[96809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:30 compute-0 sudo[96809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:30 compute-0 sudo[96809]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:30 compute-0 sudo[96834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:36:30 compute-0 sudo[96834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9584c0895a1a2eec13d7e123e3aec1f063fee833e2505ea134d49fc4fab0ae42-merged.mount: Deactivated successfully.
Jan 23 18:36:30 compute-0 podman[96792]: 2026-01-23 18:36:30.900260998 +0000 UTC m=+1.039325664 container remove 5b463b9653eeaf0f7720389ec88e321a2978293b03e45a173677fa402a7a6c1d (image=quay.io/ceph/ceph:v20, name=tender_raman, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 18:36:30 compute-0 systemd[1]: libpod-conmon-5b463b9653eeaf0f7720389ec88e321a2978293b03e45a173677fa402a7a6c1d.scope: Deactivated successfully.
Jan 23 18:36:30 compute-0 sudo[96689]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:31 compute-0 podman[96870]: 2026-01-23 18:36:31.106902853 +0000 UTC m=+0.058786906 container create 1ec91aede93274cff84fdf801fa64be970eff1c1a509da9a1f0275877bd141c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 18:36:31 compute-0 podman[96870]: 2026-01-23 18:36:31.072953712 +0000 UTC m=+0.024837765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:31 compute-0 systemd[1]: Started libpod-conmon-1ec91aede93274cff84fdf801fa64be970eff1c1a509da9a1f0275877bd141c8.scope.
Jan 23 18:36:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:31 compute-0 podman[96870]: 2026-01-23 18:36:31.221324605 +0000 UTC m=+0.173208678 container init 1ec91aede93274cff84fdf801fa64be970eff1c1a509da9a1f0275877bd141c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 18:36:31 compute-0 podman[96870]: 2026-01-23 18:36:31.229654631 +0000 UTC m=+0.181538674 container start 1ec91aede93274cff84fdf801fa64be970eff1c1a509da9a1f0275877bd141c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:31 compute-0 podman[96870]: 2026-01-23 18:36:31.23244934 +0000 UTC m=+0.184333463 container attach 1ec91aede93274cff84fdf801fa64be970eff1c1a509da9a1f0275877bd141c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 18:36:31 compute-0 relaxed_dubinsky[96886]: 167 167
Jan 23 18:36:31 compute-0 systemd[1]: libpod-1ec91aede93274cff84fdf801fa64be970eff1c1a509da9a1f0275877bd141c8.scope: Deactivated successfully.
Jan 23 18:36:31 compute-0 podman[96870]: 2026-01-23 18:36:31.234274635 +0000 UTC m=+0.186158668 container died 1ec91aede93274cff84fdf801fa64be970eff1c1a509da9a1f0275877bd141c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc15b5e6919537d805c9e432424367d048be2a57bddf0d5e57abc05376347076-merged.mount: Deactivated successfully.
Jan 23 18:36:31 compute-0 podman[96870]: 2026-01-23 18:36:31.280396767 +0000 UTC m=+0.232280800 container remove 1ec91aede93274cff84fdf801fa64be970eff1c1a509da9a1f0275877bd141c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:31 compute-0 systemd[1]: libpod-conmon-1ec91aede93274cff84fdf801fa64be970eff1c1a509da9a1f0275877bd141c8.scope: Deactivated successfully.
Jan 23 18:36:31 compute-0 podman[96913]: 2026-01-23 18:36:31.44699375 +0000 UTC m=+0.049743942 container create 3faf47a7d28eb2f7876a9959c3db881c551db1a945d6397b81caa1ba453e4817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 18:36:31 compute-0 systemd[1]: Started libpod-conmon-3faf47a7d28eb2f7876a9959c3db881c551db1a945d6397b81caa1ba453e4817.scope.
Jan 23 18:36:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 23 18:36:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 18:36:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 23 18:36:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c138afa209f07eccca73469a76dd568ceeb76c6dfd3730ee71facbf579bf33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c138afa209f07eccca73469a76dd568ceeb76c6dfd3730ee71facbf579bf33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c138afa209f07eccca73469a76dd568ceeb76c6dfd3730ee71facbf579bf33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c138afa209f07eccca73469a76dd568ceeb76c6dfd3730ee71facbf579bf33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:31 compute-0 podman[96913]: 2026-01-23 18:36:31.423792556 +0000 UTC m=+0.026542828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:31 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 23 18:36:31 compute-0 podman[96913]: 2026-01-23 18:36:31.535614374 +0000 UTC m=+0.138364586 container init 3faf47a7d28eb2f7876a9959c3db881c551db1a945d6397b81caa1ba453e4817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 18:36:31 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev c083b528-f946-474e-837b-59f827f3290c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 23 18:36:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:31 compute-0 ceph-mon[75097]: osdmap e38: 3 total, 3 up, 3 in
Jan 23 18:36:31 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 23 18:36:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:31 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:31 compute-0 podman[96913]: 2026-01-23 18:36:31.541868439 +0000 UTC m=+0.144618621 container start 3faf47a7d28eb2f7876a9959c3db881c551db1a945d6397b81caa1ba453e4817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_fermat, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:36:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 23 18:36:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 23 18:36:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 23 18:36:31 compute-0 podman[96913]: 2026-01-23 18:36:31.547835806 +0000 UTC m=+0.150586008 container attach 3faf47a7d28eb2f7876a9959c3db881c551db1a945d6397b81caa1ba453e4817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 23 18:36:31 compute-0 sudo[96959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxrmvjqubqrbommmhzhbpphncurmfanm ; /usr/bin/python3'
Jan 23 18:36:31 compute-0 sudo[96959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:31 compute-0 gallant_fermat[96929]: {
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:     "0": [
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:         {
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "devices": [
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "/dev/loop3"
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             ],
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_name": "ceph_lv0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_size": "21470642176",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "name": "ceph_lv0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "tags": {
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.crush_device_class": "",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.encrypted": "0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.osd_id": "0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.type": "block",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.vdo": "0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.with_tpm": "0"
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             },
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "type": "block",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "vg_name": "ceph_vg0"
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:         }
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:     ],
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:     "1": [
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:         {
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "devices": [
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "/dev/loop4"
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             ],
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_name": "ceph_lv1",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_size": "21470642176",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "name": "ceph_lv1",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "tags": {
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.crush_device_class": "",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.encrypted": "0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.osd_id": "1",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.type": "block",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.vdo": "0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.with_tpm": "0"
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             },
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "type": "block",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "vg_name": "ceph_vg1"
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:         }
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:     ],
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:     "2": [
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:         {
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "devices": [
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "/dev/loop5"
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             ],
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_name": "ceph_lv2",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_size": "21470642176",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "name": "ceph_lv2",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "tags": {
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.crush_device_class": "",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.encrypted": "0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.osd_id": "2",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.type": "block",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.vdo": "0",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:                 "ceph.with_tpm": "0"
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             },
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "type": "block",
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:             "vg_name": "ceph_vg2"
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:         }
Jan 23 18:36:31 compute-0 gallant_fermat[96929]:     ]
Jan 23 18:36:31 compute-0 gallant_fermat[96929]: }
Jan 23 18:36:31 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 1 unknown, 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 23 18:36:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 18:36:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 18:36:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:31 compute-0 systemd[1]: libpod-3faf47a7d28eb2f7876a9959c3db881c551db1a945d6397b81caa1ba453e4817.scope: Deactivated successfully.
Jan 23 18:36:31 compute-0 podman[96913]: 2026-01-23 18:36:31.859135472 +0000 UTC m=+0.461885664 container died 3faf47a7d28eb2f7876a9959c3db881c551db1a945d6397b81caa1ba453e4817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_fermat, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 18:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c138afa209f07eccca73469a76dd568ceeb76c6dfd3730ee71facbf579bf33-merged.mount: Deactivated successfully.
Jan 23 18:36:31 compute-0 python3[96961]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:31 compute-0 podman[96913]: 2026-01-23 18:36:31.908273178 +0000 UTC m=+0.511023360 container remove 3faf47a7d28eb2f7876a9959c3db881c551db1a945d6397b81caa1ba453e4817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:31 compute-0 systemd[1]: libpod-conmon-3faf47a7d28eb2f7876a9959c3db881c551db1a945d6397b81caa1ba453e4817.scope: Deactivated successfully.
Jan 23 18:36:31 compute-0 sudo[96834]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:31 compute-0 podman[96976]: 2026-01-23 18:36:31.954712347 +0000 UTC m=+0.041026046 container create 99ae1cc41c37b60a211b6393ceeb582dd45e1a35128ae20079806ac799880162 (image=quay.io/ceph/ceph:v20, name=busy_buck, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:36:31 compute-0 systemd[1]: Started libpod-conmon-99ae1cc41c37b60a211b6393ceeb582dd45e1a35128ae20079806ac799880162.scope.
Jan 23 18:36:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:32 compute-0 sudo[96989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f7837d990fbdd59c851126cc7444c9638fc87a5c322dd64393ae891bafb7db1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f7837d990fbdd59c851126cc7444c9638fc87a5c322dd64393ae891bafb7db1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:32 compute-0 podman[96976]: 2026-01-23 18:36:31.936370663 +0000 UTC m=+0.022684392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:32 compute-0 sudo[96989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:32 compute-0 sudo[96989]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:32 compute-0 podman[96976]: 2026-01-23 18:36:32.040775468 +0000 UTC m=+0.127089197 container init 99ae1cc41c37b60a211b6393ceeb582dd45e1a35128ae20079806ac799880162 (image=quay.io/ceph/ceph:v20, name=busy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:32 compute-0 podman[96976]: 2026-01-23 18:36:32.047871673 +0000 UTC m=+0.134185382 container start 99ae1cc41c37b60a211b6393ceeb582dd45e1a35128ae20079806ac799880162 (image=quay.io/ceph/ceph:v20, name=busy_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:32 compute-0 podman[96976]: 2026-01-23 18:36:32.051718528 +0000 UTC m=+0.138032237 container attach 99ae1cc41c37b60a211b6393ceeb582dd45e1a35128ae20079806ac799880162 (image=quay.io/ceph/ceph:v20, name=busy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 18:36:32 compute-0 sudo[97020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:36:32 compute-0 sudo[97020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:32 compute-0 ceph-mds[96162]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 23 18:36:32 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mds-cephfs-compute-0-kgapua[96158]: 2026-01-23T18:36:32.182+0000 7f4b563e0640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 23 18:36:32 compute-0 podman[97076]: 2026-01-23 18:36:32.372088887 +0000 UTC m=+0.044722208 container create db12859d30a941ce4da09dfcb71fd88a3947ed38b7c81e1a1f7717baabb35412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_perlman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:32 compute-0 systemd[1]: Started libpod-conmon-db12859d30a941ce4da09dfcb71fd88a3947ed38b7c81e1a1f7717baabb35412.scope.
Jan 23 18:36:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 18:36:32 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204892126' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:36:32 compute-0 busy_buck[97014]: 
Jan 23 18:36:32 compute-0 podman[97076]: 2026-01-23 18:36:32.349231022 +0000 UTC m=+0.021864363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:32 compute-0 busy_buck[97014]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advance
d","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.kehgrd","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 23 18:36:32 compute-0 systemd[1]: libpod-99ae1cc41c37b60a211b6393ceeb582dd45e1a35128ae20079806ac799880162.scope: Deactivated successfully.
Jan 23 18:36:32 compute-0 podman[97076]: 2026-01-23 18:36:32.504771241 +0000 UTC m=+0.177404582 container init db12859d30a941ce4da09dfcb71fd88a3947ed38b7c81e1a1f7717baabb35412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_perlman, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 23 18:36:32 compute-0 podman[97076]: 2026-01-23 18:36:32.512758509 +0000 UTC m=+0.185391830 container start db12859d30a941ce4da09dfcb71fd88a3947ed38b7c81e1a1f7717baabb35412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 18:36:32 compute-0 zealous_perlman[97093]: 167 167
Jan 23 18:36:32 compute-0 podman[97076]: 2026-01-23 18:36:32.518019159 +0000 UTC m=+0.190652480 container attach db12859d30a941ce4da09dfcb71fd88a3947ed38b7c81e1a1f7717baabb35412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_perlman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 18:36:32 compute-0 systemd[1]: libpod-db12859d30a941ce4da09dfcb71fd88a3947ed38b7c81e1a1f7717baabb35412.scope: Deactivated successfully.
Jan 23 18:36:32 compute-0 podman[97076]: 2026-01-23 18:36:32.518639045 +0000 UTC m=+0.191272366 container died db12859d30a941ce4da09dfcb71fd88a3947ed38b7c81e1a1f7717baabb35412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 18:36:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 23 18:36:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 18:36:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 23 18:36:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-67cedacae2dd374ebab119600a54aecbe55a2f4da4438a5892dde49539a966d6-merged.mount: Deactivated successfully.
Jan 23 18:36:32 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 23 18:36:32 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=8.675239563s) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 74.968032837s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:32 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev 0cc6f466-2c40-4228-bbc7-f7c2471578f4 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 23 18:36:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 23 18:36:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:32 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=8.675239563s) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown pruub 74.968032837s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:32 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 18:36:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:32 compute-0 ceph-mon[75097]: osdmap e39: 3 total, 3 up, 3 in
Jan 23 18:36:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:32 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 23 18:36:32 compute-0 ceph-mon[75097]: pgmap v91: 11 pgs: 1 unknown, 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 23 18:36:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:32 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4204892126' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 23 18:36:32 compute-0 podman[96976]: 2026-01-23 18:36:32.567378741 +0000 UTC m=+0.653692450 container died 99ae1cc41c37b60a211b6393ceeb582dd45e1a35128ae20079806ac799880162 (image=quay.io/ceph/ceph:v20, name=busy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 23 18:36:32 compute-0 podman[97076]: 2026-01-23 18:36:32.595128037 +0000 UTC m=+0.267761358 container remove db12859d30a941ce4da09dfcb71fd88a3947ed38b7c81e1a1f7717baabb35412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_perlman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 18:36:32 compute-0 systemd[1]: libpod-conmon-db12859d30a941ce4da09dfcb71fd88a3947ed38b7c81e1a1f7717baabb35412.scope: Deactivated successfully.
Jan 23 18:36:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f7837d990fbdd59c851126cc7444c9638fc87a5c322dd64393ae891bafb7db1-merged.mount: Deactivated successfully.
Jan 23 18:36:32 compute-0 podman[97098]: 2026-01-23 18:36:32.66431175 +0000 UTC m=+0.185059191 container remove 99ae1cc41c37b60a211b6393ceeb582dd45e1a35128ae20079806ac799880162 (image=quay.io/ceph/ceph:v20, name=busy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:32 compute-0 systemd[1]: libpod-conmon-99ae1cc41c37b60a211b6393ceeb582dd45e1a35128ae20079806ac799880162.scope: Deactivated successfully.
Jan 23 18:36:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:32 compute-0 sudo[96959]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:32 compute-0 podman[97132]: 2026-01-23 18:36:32.773358439 +0000 UTC m=+0.045636480 container create 0abfb1f08bb564574ab2789c7602887c50b8285dd07461912d92519f35560f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sinoussi, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 18:36:32 compute-0 systemd[1]: Started libpod-conmon-0abfb1f08bb564574ab2789c7602887c50b8285dd07461912d92519f35560f1c.scope.
Jan 23 18:36:32 compute-0 podman[97132]: 2026-01-23 18:36:32.75277623 +0000 UTC m=+0.025054311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4019300e845000d4444af50b54da25fcbe7ea4935d4c42cfc34a1697f2af16c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4019300e845000d4444af50b54da25fcbe7ea4935d4c42cfc34a1697f2af16c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4019300e845000d4444af50b54da25fcbe7ea4935d4c42cfc34a1697f2af16c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4019300e845000d4444af50b54da25fcbe7ea4935d4c42cfc34a1697f2af16c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:32 compute-0 podman[97132]: 2026-01-23 18:36:32.867223543 +0000 UTC m=+0.139501604 container init 0abfb1f08bb564574ab2789c7602887c50b8285dd07461912d92519f35560f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:36:32 compute-0 podman[97132]: 2026-01-23 18:36:32.874640146 +0000 UTC m=+0.146918187 container start 0abfb1f08bb564574ab2789c7602887c50b8285dd07461912d92519f35560f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:32 compute-0 podman[97132]: 2026-01-23 18:36:32.898585338 +0000 UTC m=+0.170863399 container attach 0abfb1f08bb564574ab2789c7602887c50b8285dd07461912d92519f35560f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:32 compute-0 radosgw[95149]: v1 topic migration: starting v1 topic migration..
Jan 23 18:36:32 compute-0 radosgw[95149]: v1 topic migration: finished v1 topic migration
Jan 23 18:36:32 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=19/20 n=0 ec=15/15 lis/c=19/19 les/c/f=20/20/0 sis=40 pruub=10.290744781s) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active pruub 71.926033020s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:32 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=19/20 n=0 ec=15/15 lis/c=19/19 les/c/f=20/20/0 sis=40 pruub=10.290744781s) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown pruub 71.926033020s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:32 compute-0 radosgw[95149]: framework: beast
Jan 23 18:36:32 compute-0 radosgw[95149]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 23 18:36:32 compute-0 radosgw[95149]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 23 18:36:32 compute-0 radosgw[95149]: starting handler: beast
Jan 23 18:36:33 compute-0 radosgw[95149]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 18:36:33 compute-0 radosgw[95149]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.kehgrd,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=35a25f40-7f2b-48cd-9ce0-79825b74e847,zone_name=default,zonegroup_id=f732e1b3-dff4-42da-a978-f70b772fe2c4,zonegroup_name=default}
Jan 23 18:36:33 compute-0 lvm[97275]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:36:33 compute-0 lvm[97275]: VG ceph_vg0 finished
Jan 23 18:36:33 compute-0 lvm[97285]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:36:33 compute-0 lvm[97285]: VG ceph_vg1 finished
Jan 23 18:36:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 23 18:36:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 23 18:36:33 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1e( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev da27f65b-fcf8-4f1a-a4b5-82ee1c7a6b81 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.e( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.12( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.14( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 23 18:36:33 compute-0 sudo[97288]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfmakcmxhqonozywemavlthfaicdanuh ; /usr/bin/python3'
Jan 23 18:36:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1a( empty local-lis/les=19/20 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.19( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 sudo[97288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.b( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.2( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1e( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=40/41 n=0 ec=15/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.e( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.14( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.1a( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.12( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.4( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:33 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3120534073' entity='client.rgw.rgw.compute-0.kehgrd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 18:36:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:33 compute-0 ceph-mon[75097]: osdmap e40: 3 total, 3 up, 3 in
Jan 23 18:36:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:33 compute-0 ceph-mon[75097]: osdmap e41: 3 total, 3 up, 3 in
Jan 23 18:36:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 23 18:36:33 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=19/19 les/c/f=20/20/0 sis=40) [2] r=0 lpr=40 pi=[19,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.19( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=40/41 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.2( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.4( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:33 compute-0 lvm[97291]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:36:33 compute-0 lvm[97291]: VG ceph_vg2 finished
Jan 23 18:36:33 compute-0 romantic_sinoussi[97167]: {}
Jan 23 18:36:33 compute-0 systemd[1]: libpod-0abfb1f08bb564574ab2789c7602887c50b8285dd07461912d92519f35560f1c.scope: Deactivated successfully.
Jan 23 18:36:33 compute-0 systemd[1]: libpod-0abfb1f08bb564574ab2789c7602887c50b8285dd07461912d92519f35560f1c.scope: Consumed 1.292s CPU time.
Jan 23 18:36:33 compute-0 podman[97132]: 2026-01-23 18:36:33.678659416 +0000 UTC m=+0.950937507 container died 0abfb1f08bb564574ab2789c7602887c50b8285dd07461912d92519f35560f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:36:33 compute-0 python3[97292]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4019300e845000d4444af50b54da25fcbe7ea4935d4c42cfc34a1697f2af16c-merged.mount: Deactivated successfully.
Jan 23 18:36:33 compute-0 podman[97132]: 2026-01-23 18:36:33.74022748 +0000 UTC m=+1.012505511 container remove 0abfb1f08bb564574ab2789c7602887c50b8285dd07461912d92519f35560f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 18:36:33 compute-0 systemd[1]: libpod-conmon-0abfb1f08bb564574ab2789c7602887c50b8285dd07461912d92519f35560f1c.scope: Deactivated successfully.
Jan 23 18:36:33 compute-0 podman[97302]: 2026-01-23 18:36:33.770233563 +0000 UTC m=+0.046345968 container create 1f2393f0a257aa961a9052ccca37ebe250e257817c3690cfab0facb7d86895f6 (image=quay.io/ceph/ceph:v20, name=wonderful_montalcini, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:36:33 compute-0 sudo[97020]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:36:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:36:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:33 compute-0 systemd[1]: Started libpod-conmon-1f2393f0a257aa961a9052ccca37ebe250e257817c3690cfab0facb7d86895f6.scope.
Jan 23 18:36:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb89998143fc3a402c054622cf89462969290cfb1e199701eed9832ea20c61e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb89998143fc3a402c054622cf89462969290cfb1e199701eed9832ea20c61e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:33 compute-0 podman[97302]: 2026-01-23 18:36:33.83435053 +0000 UTC m=+0.110462955 container init 1f2393f0a257aa961a9052ccca37ebe250e257817c3690cfab0facb7d86895f6 (image=quay.io/ceph/ceph:v20, name=wonderful_montalcini, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 18:36:33 compute-0 podman[97302]: 2026-01-23 18:36:33.841277061 +0000 UTC m=+0.117389466 container start 1f2393f0a257aa961a9052ccca37ebe250e257817c3690cfab0facb7d86895f6 (image=quay.io/ceph/ceph:v20, name=wonderful_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 18:36:33 compute-0 podman[97302]: 2026-01-23 18:36:33.750165536 +0000 UTC m=+0.026277961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:33 compute-0 podman[97302]: 2026-01-23 18:36:33.847341581 +0000 UTC m=+0.123453996 container attach 1f2393f0a257aa961a9052ccca37ebe250e257817c3690cfab0facb7d86895f6 (image=quay.io/ceph/ceph:v20, name=wonderful_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 18:36:33 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v94: 73 pgs: 63 unknown, 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 18:36:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 18:36:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:33 compute-0 sudo[97323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:36:33 compute-0 sudo[97323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:33 compute-0 sudo[97323]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:36:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:36:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:33 compute-0 sudo[97350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:33 compute-0 sudo[97350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:33 compute-0 sudo[97350]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:34 compute-0 sudo[97394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:36:34 compute-0 sudo[97394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:34 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 23 18:36:34 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 23 18:36:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 23 18:36:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736542421' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 23 18:36:34 compute-0 wonderful_montalcini[97321]: mimic
Jan 23 18:36:34 compute-0 systemd[1]: libpod-1f2393f0a257aa961a9052ccca37ebe250e257817c3690cfab0facb7d86895f6.scope: Deactivated successfully.
Jan 23 18:36:34 compute-0 podman[97302]: 2026-01-23 18:36:34.254663773 +0000 UTC m=+0.530776198 container died 1f2393f0a257aa961a9052ccca37ebe250e257817c3690cfab0facb7d86895f6 (image=quay.io/ceph/ceph:v20, name=wonderful_montalcini, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cb89998143fc3a402c054622cf89462969290cfb1e199701eed9832ea20c61e-merged.mount: Deactivated successfully.
Jan 23 18:36:34 compute-0 podman[97302]: 2026-01-23 18:36:34.318052492 +0000 UTC m=+0.594164897 container remove 1f2393f0a257aa961a9052ccca37ebe250e257817c3690cfab0facb7d86895f6 (image=quay.io/ceph/ceph:v20, name=wonderful_montalcini, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:34 compute-0 sudo[97288]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:34 compute-0 systemd[1]: libpod-conmon-1f2393f0a257aa961a9052ccca37ebe250e257817c3690cfab0facb7d86895f6.scope: Deactivated successfully.
Jan 23 18:36:34 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 23 18:36:34 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 23 18:36:34 compute-0 podman[97476]: 2026-01-23 18:36:34.443714882 +0000 UTC m=+0.053711571 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 23 18:36:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 23 18:36:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 23 18:36:34 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 23 18:36:34 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev 2afac7e3-da24-40a1-9fe0-35d6db45fa16 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 23 18:36:34 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=8.668819427s) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active pruub 82.137039185s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:34 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=8.668819427s) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown pruub 82.137039185s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:34 compute-0 podman[97476]: 2026-01-23 18:36:34.570739355 +0000 UTC m=+0.180735904 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 18:36:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 23 18:36:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:34 compute-0 ceph-mon[75097]: pgmap v94: 73 pgs: 63 unknown, 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2736542421' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:34 compute-0 ceph-mon[75097]: osdmap e42: 3 total, 3 up, 3 in
Jan 23 18:36:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:35 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 23 18:36:35 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=42 pruub=11.303168297s) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active pruub 75.114540100s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=42 pruub=11.303168297s) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown pruub 75.114540100s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 sudo[97649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsabqigixwlnkaptsfwgwbqauymumoio ; /usr/bin/python3'
Jan 23 18:36:35 compute-0 sudo[97649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:36:35 compute-0 python3[97654]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:36:35 compute-0 sudo[97394]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:35 compute-0 podman[97686]: 2026-01-23 18:36:35.34909192 +0000 UTC m=+0.043421035 container create 1917ff64fae2a27f6be213d9608891c2c28830347653081a1130c8dec4eb9e33 (image=quay.io/ceph/ceph:v20, name=inspiring_kapitsa, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:35 compute-0 systemd[76480]: Starting Mark boot as successful...
Jan 23 18:36:35 compute-0 systemd[1]: Started libpod-conmon-1917ff64fae2a27f6be213d9608891c2c28830347653081a1130c8dec4eb9e33.scope.
Jan 23 18:36:35 compute-0 systemd[76480]: Finished Mark boot as successful.
Jan 23 18:36:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327c79bcdc798eb9d0ea462839466b2308112835d6265acc4c7b2c65292efffe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327c79bcdc798eb9d0ea462839466b2308112835d6265acc4c7b2c65292efffe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:35 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 23 18:36:35 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 23 18:36:35 compute-0 sudo[97702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:35 compute-0 sudo[97702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:35 compute-0 podman[97686]: 2026-01-23 18:36:35.326343478 +0000 UTC m=+0.020672623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:36:35 compute-0 sudo[97702]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:35 compute-0 sudo[97730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:36:35 compute-0 sudo[97730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:35 compute-0 podman[97686]: 2026-01-23 18:36:35.524430371 +0000 UTC m=+0.218759526 container init 1917ff64fae2a27f6be213d9608891c2c28830347653081a1130c8dec4eb9e33 (image=quay.io/ceph/ceph:v20, name=inspiring_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 23 18:36:35 compute-0 podman[97686]: 2026-01-23 18:36:35.531095085 +0000 UTC m=+0.225424200 container start 1917ff64fae2a27f6be213d9608891c2c28830347653081a1130c8dec4eb9e33 (image=quay.io/ceph/ceph:v20, name=inspiring_kapitsa, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:35 compute-0 podman[97686]: 2026-01-23 18:36:35.536055708 +0000 UTC m=+0.230384843 container attach 1917ff64fae2a27f6be213d9608891c2c28830347653081a1130c8dec4eb9e33 (image=quay.io/ceph/ceph:v20, name=inspiring_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 23 18:36:35 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev e6771e4e-a266-40f5-b854-0cec0178cb08 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.16( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1f( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.10( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.17( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.8( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.a( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.b( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.6( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.e( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.d( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1b( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1c( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1f( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.10( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.17( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.8( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.a( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=42/43 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.6( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.b( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.e( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.d( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1b( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=42/43 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.16( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.1c( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [2] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:35 compute-0 podman[97787]: 2026-01-23 18:36:35.756768972 +0000 UTC m=+0.048106073 container create 8e251a9b6ae33478070a854ad9f1900db9ff7aa824a2213bab91e8a843b5c0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tharp, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:36:35 compute-0 systemd[1]: Started libpod-conmon-8e251a9b6ae33478070a854ad9f1900db9ff7aa824a2213bab91e8a843b5c0b4.scope.
Jan 23 18:36:35 compute-0 ceph-mon[75097]: 3.1f scrub starts
Jan 23 18:36:35 compute-0 ceph-mon[75097]: 3.1f scrub ok
Jan 23 18:36:35 compute-0 ceph-mon[75097]: 2.1f scrub starts
Jan 23 18:36:35 compute-0 ceph-mon[75097]: 2.1f scrub ok
Jan 23 18:36:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:36:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:36:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:36:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:36:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:35 compute-0 ceph-mon[75097]: osdmap e43: 3 total, 3 up, 3 in
Jan 23 18:36:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:35 compute-0 podman[97787]: 2026-01-23 18:36:35.730138413 +0000 UTC m=+0.021475534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:35 compute-0 podman[97787]: 2026-01-23 18:36:35.836982027 +0000 UTC m=+0.128319138 container init 8e251a9b6ae33478070a854ad9f1900db9ff7aa824a2213bab91e8a843b5c0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tharp, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:35 compute-0 podman[97787]: 2026-01-23 18:36:35.843709403 +0000 UTC m=+0.135046494 container start 8e251a9b6ae33478070a854ad9f1900db9ff7aa824a2213bab91e8a843b5c0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:36:35 compute-0 confident_tharp[97803]: 167 167
Jan 23 18:36:35 compute-0 systemd[1]: libpod-8e251a9b6ae33478070a854ad9f1900db9ff7aa824a2213bab91e8a843b5c0b4.scope: Deactivated successfully.
Jan 23 18:36:35 compute-0 podman[97787]: 2026-01-23 18:36:35.849873636 +0000 UTC m=+0.141210747 container attach 8e251a9b6ae33478070a854ad9f1900db9ff7aa824a2213bab91e8a843b5c0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:35 compute-0 podman[97787]: 2026-01-23 18:36:35.850297826 +0000 UTC m=+0.141634917 container died 8e251a9b6ae33478070a854ad9f1900db9ff7aa824a2213bab91e8a843b5c0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 18:36:35 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v97: 135 pgs: 1 peering, 62 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 16 KiB/s wr, 353 op/s
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 23 18:36:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 23 18:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b00f779f3c357d7545c6f2ddc4612a1f8af00bbd6b2f158c66a62bbf06389afc-merged.mount: Deactivated successfully.
Jan 23 18:36:35 compute-0 podman[97787]: 2026-01-23 18:36:35.898376007 +0000 UTC m=+0.189713098 container remove 8e251a9b6ae33478070a854ad9f1900db9ff7aa824a2213bab91e8a843b5c0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:36:35 compute-0 systemd[1]: libpod-conmon-8e251a9b6ae33478070a854ad9f1900db9ff7aa824a2213bab91e8a843b5c0b4.scope: Deactivated successfully.
Jan 23 18:36:36 compute-0 podman[97827]: 2026-01-23 18:36:36.03794655 +0000 UTC m=+0.036946645 container create 320f11853ffe1aa36e0f5ee2ce7be310fc468fb9790020c1a83304d36c2656ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kalam, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:36:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 23 18:36:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1570912300' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 23 18:36:36 compute-0 systemd[1]: Started libpod-conmon-320f11853ffe1aa36e0f5ee2ce7be310fc468fb9790020c1a83304d36c2656ae.scope.
Jan 23 18:36:36 compute-0 inspiring_kapitsa[97705]: 
Jan 23 18:36:36 compute-0 systemd[1]: libpod-1917ff64fae2a27f6be213d9608891c2c28830347653081a1130c8dec4eb9e33.scope: Deactivated successfully.
Jan 23 18:36:36 compute-0 conmon[97705]: conmon 1917ff64fae2a27f6be2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1917ff64fae2a27f6be213d9608891c2c28830347653081a1130c8dec4eb9e33.scope/container/memory.events
Jan 23 18:36:36 compute-0 inspiring_kapitsa[97705]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Jan 23 18:36:36 compute-0 podman[97686]: 2026-01-23 18:36:36.090182454 +0000 UTC m=+0.784511609 container died 1917ff64fae2a27f6be213d9608891c2c28830347653081a1130c8dec4eb9e33 (image=quay.io/ceph/ceph:v20, name=inspiring_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 23 18:36:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31a346f48a3fb8ba8b5ce26317e5382adffa96a34d61494793362a6e93bc348/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31a346f48a3fb8ba8b5ce26317e5382adffa96a34d61494793362a6e93bc348/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31a346f48a3fb8ba8b5ce26317e5382adffa96a34d61494793362a6e93bc348/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31a346f48a3fb8ba8b5ce26317e5382adffa96a34d61494793362a6e93bc348/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31a346f48a3fb8ba8b5ce26317e5382adffa96a34d61494793362a6e93bc348/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:36 compute-0 podman[97827]: 2026-01-23 18:36:36.020715814 +0000 UTC m=+0.019715929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-327c79bcdc798eb9d0ea462839466b2308112835d6265acc4c7b2c65292efffe-merged.mount: Deactivated successfully.
Jan 23 18:36:36 compute-0 podman[97827]: 2026-01-23 18:36:36.125928328 +0000 UTC m=+0.124928463 container init 320f11853ffe1aa36e0f5ee2ce7be310fc468fb9790020c1a83304d36c2656ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 18:36:36 compute-0 podman[97827]: 2026-01-23 18:36:36.134354817 +0000 UTC m=+0.133354912 container start 320f11853ffe1aa36e0f5ee2ce7be310fc468fb9790020c1a83304d36c2656ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kalam, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:36:36 compute-0 podman[97686]: 2026-01-23 18:36:36.1470264 +0000 UTC m=+0.841355555 container remove 1917ff64fae2a27f6be213d9608891c2c28830347653081a1130c8dec4eb9e33 (image=quay.io/ceph/ceph:v20, name=inspiring_kapitsa, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 18:36:36 compute-0 systemd[1]: libpod-conmon-1917ff64fae2a27f6be213d9608891c2c28830347653081a1130c8dec4eb9e33.scope: Deactivated successfully.
Jan 23 18:36:36 compute-0 podman[97827]: 2026-01-23 18:36:36.157045528 +0000 UTC m=+0.156045643 container attach 320f11853ffe1aa36e0f5ee2ce7be310fc468fb9790020c1a83304d36c2656ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:36:36 compute-0 sudo[97649]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:36 compute-0 sad_kalam[97845]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:36:36 compute-0 sad_kalam[97845]: --> All data devices are unavailable
Jan 23 18:36:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 23 18:36:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 23 18:36:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 23 18:36:36 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 23 18:36:36 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev 793ae404-e2af-4d41-9e0f-34d1669b8d70 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 23 18:36:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 23 18:36:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:36 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 44 pg[6.0( v 36'39 (0'0,36'39] local-lis/les=23/24 n=22 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=44 pruub=10.848325729s) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 35'38 mlcod 35'38 active pruub 86.340728760s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:36 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 44 pg[6.0( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=44 pruub=10.848325729s) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 35'38 mlcod 0'0 unknown pruub 86.340728760s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:36 compute-0 systemd[1]: libpod-320f11853ffe1aa36e0f5ee2ce7be310fc468fb9790020c1a83304d36c2656ae.scope: Deactivated successfully.
Jan 23 18:36:36 compute-0 podman[97827]: 2026-01-23 18:36:36.602036442 +0000 UTC m=+0.601036537 container died 320f11853ffe1aa36e0f5ee2ce7be310fc468fb9790020c1a83304d36c2656ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f31a346f48a3fb8ba8b5ce26317e5382adffa96a34d61494793362a6e93bc348-merged.mount: Deactivated successfully.
Jan 23 18:36:36 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 23 18:36:36 compute-0 podman[97827]: 2026-01-23 18:36:36.643406117 +0000 UTC m=+0.642406212 container remove 320f11853ffe1aa36e0f5ee2ce7be310fc468fb9790020c1a83304d36c2656ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kalam, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:36 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 23 18:36:36 compute-0 systemd[1]: libpod-conmon-320f11853ffe1aa36e0f5ee2ce7be310fc468fb9790020c1a83304d36c2656ae.scope: Deactivated successfully.
Jan 23 18:36:36 compute-0 sudo[97730]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:36 compute-0 sudo[97892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:36 compute-0 sudo[97892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:36 compute-0 sudo[97892]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:36 compute-0 sudo[97917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:36:36 compute-0 sudo[97917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:36 compute-0 ceph-mon[75097]: 3.1c scrub starts
Jan 23 18:36:36 compute-0 ceph-mon[75097]: 3.1c scrub ok
Jan 23 18:36:36 compute-0 ceph-mon[75097]: 2.1d scrub starts
Jan 23 18:36:36 compute-0 ceph-mon[75097]: 2.1d scrub ok
Jan 23 18:36:36 compute-0 ceph-mon[75097]: pgmap v97: 135 pgs: 1 peering, 62 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 16 KiB/s wr, 353 op/s
Jan 23 18:36:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 23 18:36:36 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1570912300' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 23 18:36:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 23 18:36:36 compute-0 ceph-mon[75097]: osdmap e44: 3 total, 3 up, 3 in
Jan 23 18:36:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:36 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=44 pruub=11.558750153s) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active pruub 82.182586670s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:36 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=44 pruub=11.558750153s) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown pruub 82.182586670s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 podman[97954]: 2026-01-23 18:36:37.147912393 +0000 UTC m=+0.039753144 container create e159b78f1e73d46feb9c0a6626a63ab82b23f46ad6649413d1ecc541a63430bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_edison, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:37 compute-0 systemd[1]: Started libpod-conmon-e159b78f1e73d46feb9c0a6626a63ab82b23f46ad6649413d1ecc541a63430bf.scope.
Jan 23 18:36:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:37 compute-0 podman[97954]: 2026-01-23 18:36:37.225390891 +0000 UTC m=+0.117231662 container init e159b78f1e73d46feb9c0a6626a63ab82b23f46ad6649413d1ecc541a63430bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 18:36:37 compute-0 podman[97954]: 2026-01-23 18:36:37.130182704 +0000 UTC m=+0.022023455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:37 compute-0 podman[97954]: 2026-01-23 18:36:37.23342078 +0000 UTC m=+0.125261511 container start e159b78f1e73d46feb9c0a6626a63ab82b23f46ad6649413d1ecc541a63430bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:37 compute-0 podman[97954]: 2026-01-23 18:36:37.237370997 +0000 UTC m=+0.129211778 container attach e159b78f1e73d46feb9c0a6626a63ab82b23f46ad6649413d1ecc541a63430bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:37 compute-0 infallible_edison[97970]: 167 167
Jan 23 18:36:37 compute-0 systemd[1]: libpod-e159b78f1e73d46feb9c0a6626a63ab82b23f46ad6649413d1ecc541a63430bf.scope: Deactivated successfully.
Jan 23 18:36:37 compute-0 podman[97954]: 2026-01-23 18:36:37.239228213 +0000 UTC m=+0.131068944 container died e159b78f1e73d46feb9c0a6626a63ab82b23f46ad6649413d1ecc541a63430bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 18:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-126fc224d074f5ec8c6b176a2e4fa88ec7189ee90585f1fa6e0f6904d3ba88b7-merged.mount: Deactivated successfully.
Jan 23 18:36:37 compute-0 podman[97954]: 2026-01-23 18:36:37.277404759 +0000 UTC m=+0.169245490 container remove e159b78f1e73d46feb9c0a6626a63ab82b23f46ad6649413d1ecc541a63430bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 23 18:36:37 compute-0 systemd[1]: libpod-conmon-e159b78f1e73d46feb9c0a6626a63ab82b23f46ad6649413d1ecc541a63430bf.scope: Deactivated successfully.
Jan 23 18:36:37 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 23 18:36:37 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 23 18:36:37 compute-0 podman[97994]: 2026-01-23 18:36:37.427801901 +0000 UTC m=+0.042299648 container create 751151c0088182c70b12e06207714446476b3cc6a31cea45d9a862d92b7ecd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 18:36:37 compute-0 systemd[1]: Started libpod-conmon-751151c0088182c70b12e06207714446476b3cc6a31cea45d9a862d92b7ecd7f.scope.
Jan 23 18:36:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb565c196bc336336289effd28b2e47134e68595b0b07c0b0fea0ed0bf2c2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb565c196bc336336289effd28b2e47134e68595b0b07c0b0fea0ed0bf2c2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb565c196bc336336289effd28b2e47134e68595b0b07c0b0fea0ed0bf2c2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb565c196bc336336289effd28b2e47134e68595b0b07c0b0fea0ed0bf2c2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:37 compute-0 podman[97994]: 2026-01-23 18:36:37.408064682 +0000 UTC m=+0.022562469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:37 compute-0 podman[97994]: 2026-01-23 18:36:37.509858782 +0000 UTC m=+0.124356549 container init 751151c0088182c70b12e06207714446476b3cc6a31cea45d9a862d92b7ecd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:36:37 compute-0 podman[97994]: 2026-01-23 18:36:37.519139131 +0000 UTC m=+0.133636878 container start 751151c0088182c70b12e06207714446476b3cc6a31cea45d9a862d92b7ecd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 23 18:36:37 compute-0 podman[97994]: 2026-01-23 18:36:37.522943266 +0000 UTC m=+0.137441033 container attach 751151c0088182c70b12e06207714446476b3cc6a31cea45d9a862d92b7ecd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_agnesi, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 23 18:36:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 23 18:36:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 23 18:36:37 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 23 18:36:37 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev 249a9492-e16d-4038-b438-71c50136f726 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 23 18:36:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 23 18:36:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.a( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.5( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.9( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.4( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.7( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.8( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.6( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.2( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.e( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.f( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.c( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.d( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.12( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.4( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.17( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.d( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.7( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.19( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.6( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.0( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 35'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.e( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.c( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 45 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.12( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.17( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=44/45 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.d( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.7( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.19( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:37 compute-0 angry_agnesi[98010]: {
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:     "0": [
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:         {
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "devices": [
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "/dev/loop3"
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             ],
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_name": "ceph_lv0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_size": "21470642176",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "name": "ceph_lv0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "tags": {
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.crush_device_class": "",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.encrypted": "0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.osd_id": "0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.type": "block",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.vdo": "0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.with_tpm": "0"
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             },
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "type": "block",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "vg_name": "ceph_vg0"
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:         }
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:     ],
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:     "1": [
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:         {
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "devices": [
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "/dev/loop4"
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             ],
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_name": "ceph_lv1",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_size": "21470642176",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "name": "ceph_lv1",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "tags": {
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.crush_device_class": "",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.encrypted": "0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.osd_id": "1",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.type": "block",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.vdo": "0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.with_tpm": "0"
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             },
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "type": "block",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "vg_name": "ceph_vg1"
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:         }
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:     ],
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:     "2": [
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:         {
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "devices": [
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "/dev/loop5"
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             ],
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_name": "ceph_lv2",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_size": "21470642176",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "name": "ceph_lv2",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "tags": {
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.cluster_name": "ceph",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.crush_device_class": "",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.encrypted": "0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.objectstore": "bluestore",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.osd_id": "2",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.type": "block",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.vdo": "0",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:                 "ceph.with_tpm": "0"
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             },
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "type": "block",
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:             "vg_name": "ceph_vg2"
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:         }
Jan 23 18:36:37 compute-0 angry_agnesi[98010]:     ]
Jan 23 18:36:37 compute-0 angry_agnesi[98010]: }
Jan 23 18:36:37 compute-0 systemd[1]: libpod-751151c0088182c70b12e06207714446476b3cc6a31cea45d9a862d92b7ecd7f.scope: Deactivated successfully.
Jan 23 18:36:37 compute-0 podman[97994]: 2026-01-23 18:36:37.825038893 +0000 UTC m=+0.439536670 container died 751151c0088182c70b12e06207714446476b3cc6a31cea45d9a862d92b7ecd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_agnesi, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 23 18:36:37 compute-0 ceph-mon[75097]: 4.1c scrub starts
Jan 23 18:36:37 compute-0 ceph-mon[75097]: 4.1c scrub ok
Jan 23 18:36:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:37 compute-0 ceph-mon[75097]: osdmap e45: 3 total, 3 up, 3 in
Jan 23 18:36:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7eb565c196bc336336289effd28b2e47134e68595b0b07c0b0fea0ed0bf2c2c-merged.mount: Deactivated successfully.
Jan 23 18:36:37 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v100: 181 pgs: 1 peering, 108 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 16 KiB/s wr, 353 op/s
Jan 23 18:36:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 18:36:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 18:36:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:37 compute-0 podman[97994]: 2026-01-23 18:36:37.871939223 +0000 UTC m=+0.486436970 container remove 751151c0088182c70b12e06207714446476b3cc6a31cea45d9a862d92b7ecd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_agnesi, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:37 compute-0 systemd[1]: libpod-conmon-751151c0088182c70b12e06207714446476b3cc6a31cea45d9a862d92b7ecd7f.scope: Deactivated successfully.
Jan 23 18:36:37 compute-0 sudo[97917]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:37 compute-0 sudo[98031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:36:37 compute-0 sudo[98031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:37 compute-0 sudo[98031]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:38 compute-0 sudo[98056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:36:38 compute-0 sudo[98056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:38 compute-0 podman[98093]: 2026-01-23 18:36:38.319285756 +0000 UTC m=+0.050919682 container create 6b7708804a33f466a7c3e7c522c788ad50abe98aaf3aa3b8a9e6d0edc1034e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:36:38 compute-0 systemd[1]: Started libpod-conmon-6b7708804a33f466a7c3e7c522c788ad50abe98aaf3aa3b8a9e6d0edc1034e16.scope.
Jan 23 18:36:38 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:38 compute-0 podman[98093]: 2026-01-23 18:36:38.291780735 +0000 UTC m=+0.023414711 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:38 compute-0 podman[98093]: 2026-01-23 18:36:38.390733484 +0000 UTC m=+0.122367370 container init 6b7708804a33f466a7c3e7c522c788ad50abe98aaf3aa3b8a9e6d0edc1034e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 18:36:38 compute-0 podman[98093]: 2026-01-23 18:36:38.395758508 +0000 UTC m=+0.127392414 container start 6b7708804a33f466a7c3e7c522c788ad50abe98aaf3aa3b8a9e6d0edc1034e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:36:38 compute-0 podman[98093]: 2026-01-23 18:36:38.399420269 +0000 UTC m=+0.131054165 container attach 6b7708804a33f466a7c3e7c522c788ad50abe98aaf3aa3b8a9e6d0edc1034e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 18:36:38 compute-0 thirsty_hypatia[98109]: 167 167
Jan 23 18:36:38 compute-0 systemd[1]: libpod-6b7708804a33f466a7c3e7c522c788ad50abe98aaf3aa3b8a9e6d0edc1034e16.scope: Deactivated successfully.
Jan 23 18:36:38 compute-0 podman[98093]: 2026-01-23 18:36:38.400868925 +0000 UTC m=+0.132502831 container died 6b7708804a33f466a7c3e7c522c788ad50abe98aaf3aa3b8a9e6d0edc1034e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5384e265a997adadbd2ba94b1b9a34a1b455c41174741e1e602540bb448da022-merged.mount: Deactivated successfully.
Jan 23 18:36:38 compute-0 podman[98093]: 2026-01-23 18:36:38.438011645 +0000 UTC m=+0.169645531 container remove 6b7708804a33f466a7c3e7c522c788ad50abe98aaf3aa3b8a9e6d0edc1034e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:36:38 compute-0 systemd[1]: libpod-conmon-6b7708804a33f466a7c3e7c522c788ad50abe98aaf3aa3b8a9e6d0edc1034e16.scope: Deactivated successfully.
Jan 23 18:36:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 23 18:36:38 compute-0 podman[98132]: 2026-01-23 18:36:38.576492561 +0000 UTC m=+0.023356868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:36:38 compute-0 podman[98132]: 2026-01-23 18:36:38.721827589 +0000 UTC m=+0.168691876 container create 3b0daa39907884bbde141800f31359bf594647b49e20851312e7bd712c9176b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cray, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:36:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 23 18:36:38 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev f991f946-19ca-4da2-ba0e-e15cdf5ff7b1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev 446d360e-4368-4161-9d5f-94cf4b827bf5 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event 446d360e-4368-4161-9d5f-94cf4b827bf5 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 8 seconds
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev c083b528-f946-474e-837b-59f827f3290c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event c083b528-f946-474e-837b-59f827f3290c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 7 seconds
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev 0cc6f466-2c40-4228-bbc7-f7c2471578f4 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event 0cc6f466-2c40-4228-bbc7-f7c2471578f4 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 6 seconds
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev da27f65b-fcf8-4f1a-a4b5-82ee1c7a6b81 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event da27f65b-fcf8-4f1a-a4b5-82ee1c7a6b81 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 5 seconds
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev 2afac7e3-da24-40a1-9fe0-35d6db45fa16 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event 2afac7e3-da24-40a1-9fe0-35d6db45fa16 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 4 seconds
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev e6771e4e-a266-40f5-b854-0cec0178cb08 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event e6771e4e-a266-40f5-b854-0cec0178cb08 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 3 seconds
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev 793ae404-e2af-4d41-9e0f-34d1669b8d70 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event 793ae404-e2af-4d41-9e0f-34d1669b8d70 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 2 seconds
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev 249a9492-e16d-4038-b438-71c50136f726 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event 249a9492-e16d-4038-b438-71c50136f726 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 1 seconds
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev f991f946-19ca-4da2-ba0e-e15cdf5ff7b1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 23 18:36:38 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event f991f946-19ca-4da2-ba0e-e15cdf5ff7b1 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 0 seconds
Jan 23 18:36:38 compute-0 systemd[1]: Started libpod-conmon-3b0daa39907884bbde141800f31359bf594647b49e20851312e7bd712c9176b6.scope.
Jan 23 18:36:38 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2488bc3a42a9bb824d7659911dfa95ad415bb0842b0ba7710b531d9e28f1ba05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2488bc3a42a9bb824d7659911dfa95ad415bb0842b0ba7710b531d9e28f1ba05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2488bc3a42a9bb824d7659911dfa95ad415bb0842b0ba7710b531d9e28f1ba05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2488bc3a42a9bb824d7659911dfa95ad415bb0842b0ba7710b531d9e28f1ba05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:36:38 compute-0 podman[98132]: 2026-01-23 18:36:38.796811145 +0000 UTC m=+0.243675462 container init 3b0daa39907884bbde141800f31359bf594647b49e20851312e7bd712c9176b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cray, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:38 compute-0 podman[98132]: 2026-01-23 18:36:38.804467554 +0000 UTC m=+0.251331851 container start 3b0daa39907884bbde141800f31359bf594647b49e20851312e7bd712c9176b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cray, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:36:38 compute-0 podman[98132]: 2026-01-23 18:36:38.810265347 +0000 UTC m=+0.257129634 container attach 3b0daa39907884bbde141800f31359bf594647b49e20851312e7bd712c9176b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 18:36:38 compute-0 ceph-mon[75097]: 2.1c scrub starts
Jan 23 18:36:38 compute-0 ceph-mon[75097]: 2.1c scrub ok
Jan 23 18:36:38 compute-0 ceph-mon[75097]: pgmap v100: 181 pgs: 1 peering, 108 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 16 KiB/s wr, 353 op/s
Jan 23 18:36:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:36:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:38 compute-0 ceph-mon[75097]: osdmap e46: 3 total, 3 up, 3 in
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 46 pg[8.0( v 33'6 (0'0,33'6] local-lis/les=32/33 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=46 pruub=9.660744667s) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 33'5 mlcod 33'5 active pruub 82.617347717s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 46 pg[9.0( v 41'483 (0'0,41'483] local-lis/les=34/35 n=210 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=11.945671082s) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 41'482 mlcod 41'482 active pruub 84.902610779s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 46 pg[8.0( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=46 pruub=9.660744667s) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 33'5 mlcod 0'0 unknown pruub 82.617347717s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 46 pg[9.0( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=11.945671082s) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 41'482 mlcod 0'0 unknown pruub 84.902610779s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae19700 space 0x55944c330540 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae50880 space 0x55944a19e240 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae3a480 space 0x55944a55b740 0x0~98 clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae3ac80 space 0x55944a760540 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae35300 space 0x55944c321140 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae18500 space 0x55944c331d40 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae48400 space 0x55944a5f6840 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae50580 space 0x55944acbce40 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae50f00 space 0x55944b260e40 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae50d00 space 0x55944a1a3140 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae18f80 space 0x55944aec5740 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944a08d900 space 0x55944a0e5a40 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae18780 space 0x55944c2fb140 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae50980 space 0x55944acbc540 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae19880 space 0x55944aec5140 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae5a500 space 0x55944aebe540 0x0~98 clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae5aa80 space 0x55944c304e40 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae19180 space 0x55944a4fe840 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944adf6d80 space 0x55944b29d740 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae48500 space 0x55944a5f7140 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae19680 space 0x55944a1a3d40 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae18300 space 0x55944aec8240 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae20680 space 0x55944a181440 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae18380 space 0x55944aec4840 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae40380 space 0x55944b2a2e40 0x0~98 clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae3b900 space 0x55944a5f7a40 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae2ab00 space 0x55944c304540 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae2b180 space 0x55944aec8b40 0x0~98 clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae2a880 space 0x55944a4ff140 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae5af80 space 0x55944c2fba40 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae2b680 space 0x55944c2fa840 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae20780 space 0x55944b29ce40 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae5a280 space 0x55944c30d440 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae19300 space 0x55944b26ae40 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae51a00 space 0x55944b260540 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae35900 space 0x55944a55ab40 0x0~98 clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae3b980 space 0x55944b261740 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae34d80 space 0x55944a55a240 0x0~98 clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae20480 space 0x55944c320b40 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae5bc00 space 0x55944c321740 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae50280 space 0x55944a1a2840 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae50300 space 0x55944acbd740 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae5bb00 space 0x55944aebfd40 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae18a00 space 0x55944a180b40 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae5be00 space 0x55944a761440 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae2af00 space 0x55944aebf440 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae34300 space 0x55944b2a4540 0x0~98 clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae18800 space 0x55944a180240 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae18280 space 0x55944c305440 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae50a00 space 0x55944b228e40 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae40080 space 0x55944c305d40 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae19400 space 0x55944b228540 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae3b800 space 0x55944aebee40 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae35500 space 0x55944a19f140 0x0~9a clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae40800 space 0x55944a0f2e40 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae19c00 space 0x55944b229740 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae20a80 space 0x55944b29c540 0x0~6e clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae35d00 space 0x55944aecba40 0x0~98 clean)
Jan 23 18:36:39 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55944a48d680) split_cache   moving buffer(0x55944ae2a000 space 0x55944a4fa240 0x0~6e clean)
Jan 23 18:36:39 compute-0 lvm[98227]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:36:39 compute-0 lvm[98226]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:36:39 compute-0 lvm[98226]: VG ceph_vg0 finished
Jan 23 18:36:39 compute-0 lvm[98227]: VG ceph_vg1 finished
Jan 23 18:36:39 compute-0 lvm[98229]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:36:39 compute-0 lvm[98229]: VG ceph_vg2 finished
Jan 23 18:36:39 compute-0 lvm[98230]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:36:39 compute-0 lvm[98230]: VG ceph_vg0 finished
Jan 23 18:36:39 compute-0 loving_cray[98148]: {}
Jan 23 18:36:39 compute-0 systemd[1]: libpod-3b0daa39907884bbde141800f31359bf594647b49e20851312e7bd712c9176b6.scope: Deactivated successfully.
Jan 23 18:36:39 compute-0 systemd[1]: libpod-3b0daa39907884bbde141800f31359bf594647b49e20851312e7bd712c9176b6.scope: Consumed 1.287s CPU time.
Jan 23 18:36:39 compute-0 podman[98132]: 2026-01-23 18:36:39.585529406 +0000 UTC m=+1.032393693 container died 3b0daa39907884bbde141800f31359bf594647b49e20851312e7bd712c9176b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cray, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:36:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2488bc3a42a9bb824d7659911dfa95ad415bb0842b0ba7710b531d9e28f1ba05-merged.mount: Deactivated successfully.
Jan 23 18:36:39 compute-0 podman[98132]: 2026-01-23 18:36:39.634704243 +0000 UTC m=+1.081568530 container remove 3b0daa39907884bbde141800f31359bf594647b49e20851312e7bd712c9176b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cray, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 23 18:36:39 compute-0 systemd[1]: libpod-conmon-3b0daa39907884bbde141800f31359bf594647b49e20851312e7bd712c9176b6.scope: Deactivated successfully.
Jan 23 18:36:39 compute-0 sudo[98056]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:36:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:36:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:39 compute-0 sudo[98247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:36:39 compute-0 sudo[98247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:36:39 compute-0 sudo[98247]: pam_unix(sudo:session): session closed for user root
Jan 23 18:36:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 23 18:36:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 23 18:36:39 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v103: 243 pgs: 62 unknown, 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 23 18:36:39 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 23 18:36:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 18:36:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.14( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.15( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.14( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.15( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.16( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.17( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.17( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.16( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.10( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.11( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.11( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.10( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.12( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.13( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.c( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.12( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.d( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.d( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.13( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.c( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.e( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.f( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.8( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.9( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.a( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.b( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.3( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.2( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1( v 33'6 (0'0,33'6] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.f( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.e( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.b( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.a( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.9( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.8( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.2( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.3( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.7( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.6( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.6( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.7( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.5( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.4( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.4( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.5( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1a( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1b( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1a( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1b( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.19( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.18( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.18( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.19( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1e( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1f( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1f( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1e( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1d( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1c( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1d( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=34/35 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.14( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1c( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.15( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.17( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.16( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.14( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.10( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.11( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.10( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.12( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.c( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.13( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.d( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.e( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.8( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.a( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.0( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 33'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.3( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.2( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.0( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 41'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.f( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.b( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.a( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.9( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.2( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.7( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.6( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.4( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.5( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.4( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1b( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.5( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1a( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1a( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.12( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.19( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1f( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1e( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1d( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.18( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[8.1c( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [1] r=0 lpr=46 pi=[32,46)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:39 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 47 pg[9.1c( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=34/34 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[34,46)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:40 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 23 18:36:40 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 23 18:36:40 compute-0 ceph-mgr[75390]: [progress INFO root] Writing back 14 completed events
Jan 23 18:36:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 18:36:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:40 compute-0 ceph-mon[75097]: pgmap v103: 243 pgs: 62 unknown, 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 23 18:36:40 compute-0 ceph-mon[75097]: osdmap e47: 3 total, 3 up, 3 in
Jan 23 18:36:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:36:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:36:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 23 18:36:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 23 18:36:40 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 23 18:36:41 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 23 18:36:41 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 48 pg[10.0( v 40'18 (0'0,40'18] local-lis/les=36/37 n=9 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=48 pruub=11.699344635s) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 40'17 mlcod 40'17 active pruub 81.981819153s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 48 pg[10.0( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=48 pruub=11.699344635s) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 40'17 mlcod 0'0 unknown pruub 81.981819153s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v105: 274 pgs: 93 unknown, 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 23 18:36:41 compute-0 ceph-mon[75097]: 2.1e scrub starts
Jan 23 18:36:41 compute-0 ceph-mon[75097]: 2.1e scrub ok
Jan 23 18:36:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:36:41 compute-0 ceph-mon[75097]: osdmap e48: 3 total, 3 up, 3 in
Jan 23 18:36:41 compute-0 ceph-mon[75097]: 3.1b scrub starts
Jan 23 18:36:41 compute-0 ceph-mon[75097]: 3.1b scrub ok
Jan 23 18:36:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 23 18:36:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 23 18:36:41 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.12( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1e( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1f( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.10( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1c( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.11( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1d( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.19( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.18( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1a( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1b( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.6( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.5( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.4( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.3( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.8( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.f( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.7( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.9( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.a( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.b( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.c( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.d( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.e( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.2( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.13( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.14( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.15( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.16( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.17( v 40'18 lc 0'0 (0'0,40'18] local-lis/les=36/37 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1e( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1( v 40'18 (0'0,40'18] local-lis/les=36/37 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.12( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1d( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.10( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1c( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.11( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.18( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.6( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.19( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1a( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.4( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.3( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.5( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.f( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.0( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 40'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.9( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.a( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1f( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.8( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.d( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1b( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.e( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.b( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.2( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.15( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.16( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.c( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.13( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.14( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.1( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.7( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 49 pg[10.17( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:42 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 23 18:36:42 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 23 18:36:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:42 compute-0 ceph-mon[75097]: pgmap v105: 274 pgs: 93 unknown, 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 23 18:36:42 compute-0 ceph-mon[75097]: osdmap e49: 3 total, 3 up, 3 in
Jan 23 18:36:42 compute-0 ceph-mon[75097]: 3.1d scrub starts
Jan 23 18:36:42 compute-0 ceph-mon[75097]: 3.1d scrub ok
Jan 23 18:36:43 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 23 18:36:43 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 23 18:36:43 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v107: 274 pgs: 93 unknown, 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 998 B/s rd, 0 B/s wr, 2 op/s
Jan 23 18:36:44 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 23 18:36:44 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 23 18:36:44 compute-0 ceph-mon[75097]: 2.6 scrub starts
Jan 23 18:36:44 compute-0 ceph-mon[75097]: 2.6 scrub ok
Jan 23 18:36:44 compute-0 ceph-mon[75097]: pgmap v107: 274 pgs: 93 unknown, 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 998 B/s rd, 0 B/s wr, 2 op/s
Jan 23 18:36:45 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 23 18:36:45 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 23 18:36:45 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v108: 274 pgs: 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 23 18:36:45 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.031997681s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.315124512s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.060334206s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343505859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319860458s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.603050232s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.14( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317678452s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.600860596s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.031909943s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.315124512s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319816589s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.603050232s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.14( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317566872s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.600860596s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.15( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319512367s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.603004456s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.060042381s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343551636s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.060024261s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343551636s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.15( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319473267s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.603004456s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.060243607s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343505859s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.031509399s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.315086365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319177628s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.603065491s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.031267166s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.315185547s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.059550285s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343498230s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319112778s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.603065491s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.059533119s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343498230s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.031490326s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.315086365s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.031209946s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.315185547s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.12( v 49'19 (0'0,49'19] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.368017197s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 active pruub 86.606849670s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.059297562s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343490601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.031682968s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.270545959s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319005966s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.603317261s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.059271812s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343490601s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.10( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318929672s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.603248596s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.031523705s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.270545959s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030829430s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.315162659s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.12( v 49'19 (0'0,49'19] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.367887497s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 unknown NOTIFY pruub 86.606849670s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.19( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.016251564s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255798340s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.19( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.016222954s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255798340s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318985939s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.603317261s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.10( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318907738s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.603248596s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030800819s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.315162659s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.11( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318944931s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.603347778s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.11( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318924904s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.603347778s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.12( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318873405s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.603385925s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.12( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318852425s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.603385925s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.031003952s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.315544128s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.058942795s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343505859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030983925s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.315544128s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.058924675s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343505859s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.031034470s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.315734863s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.c( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318696022s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.603401184s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.10( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.367064476s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.606971741s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.10( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.367045403s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.606971741s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.17( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.015720367s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255775452s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.17( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.015702248s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255775452s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.1e( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.364789009s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.605018616s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.1e( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.364769936s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.605018616s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.16( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.015390396s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255775452s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.16( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.015374184s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255775452s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.035739899s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276252747s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.035706520s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276252747s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.015151978s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255760193s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.11( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366271019s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.607124329s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.11( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366243362s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.607124329s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.035217285s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276252747s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.035197258s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276252747s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.15( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.014579773s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255737305s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.058758736s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343482971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.035097122s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276351929s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.035078049s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276351929s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.058736801s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343482971s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.13( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.014342308s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255737305s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.13( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.014327049s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255737305s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.15( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.014484406s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255737305s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.c( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318663597s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.603401184s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[3.1d( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318550110s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.603401184s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318516731s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.603401184s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.034560204s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276252747s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030988693s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.315734863s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.014018059s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255760193s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.d( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318241119s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.603431702s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[3.1e( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.d( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.318222046s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.603431702s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.034645081s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276252747s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.058232307s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343482971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.058181763s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343482971s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.058036804s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343460083s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317876816s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.603347778s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317858696s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.603347778s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030206680s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.315734863s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.057969093s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343460083s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030172348s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.315734863s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.033462524s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276268005s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.033435822s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276268005s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.1a( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.371163368s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614105225s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.e( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317767143s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.603439331s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.033281326s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276306152s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.033237457s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276306152s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.19( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.370956421s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614059448s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.19( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.370941162s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614059448s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.11( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.012489319s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255737305s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317755699s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.603454590s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.1a( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.371080399s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614105225s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.11( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.012474060s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255737305s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.e( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317747116s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.603439331s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030225754s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.315963745s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317734718s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.603454590s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.032747269s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276313782s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.7( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.371274948s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614845276s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.7( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.371243477s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614845276s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.012063980s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255706787s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030217171s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.315963745s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.012045860s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255706787s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030206680s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.315963745s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.6( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.370266914s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614074707s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.032739639s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276573181s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.032683372s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276573181s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.6( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.370178223s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614074707s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.d( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.011695862s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255706787s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.d( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.011640549s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255706787s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317705154s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.603507996s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.4( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.369870186s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614128113s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.4( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.369850159s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614128113s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.032436371s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276756287s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.032423019s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276756287s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.057730675s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343566895s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.011123657s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255599976s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317685127s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.603507996s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030129433s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.315979004s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.011056900s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255599976s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.057712555s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343566895s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030145645s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.315963745s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030111313s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.315979004s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317568779s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.603500366s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.057496071s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343460083s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.317550659s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.603500366s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.031316757s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.317306519s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.057478905s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343460083s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.031298637s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.317306519s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.057416916s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343460083s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.057398796s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343460083s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.030908585s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276313782s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.030085564s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.316200256s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320977211s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.607109070s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.057303429s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343444824s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320962906s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.607109070s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.057281494s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343444824s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.f( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320914268s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607116699s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.f( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320894241s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607116699s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.f( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.368402481s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614181519s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.f( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.368385315s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614181519s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.8( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.009796143s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255607605s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.8( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.009777069s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255607605s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.030676842s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276565552s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.030666351s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276565552s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.8( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.368947983s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614631653s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.030484200s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276557922s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.8( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.368573189s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614631653s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.030465126s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276557922s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.9( v 49'19 (0'0,49'19] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.368144989s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 active pruub 86.614250183s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.030516624s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276794434s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.b( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.368395805s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614700317s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.030496597s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276794434s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.9( v 49'19 (0'0,49'19] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.368115425s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 unknown NOTIFY pruub 86.614250183s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.b( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.368326187s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614700317s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.7( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.009213448s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255630493s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.056978226s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343231201s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029956818s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.316200256s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.056941032s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343231201s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.b( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320734024s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607154846s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.3( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.009003639s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255485535s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.3( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.008514404s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255485535s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.029788017s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276802063s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.029764175s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276802063s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.2( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.008452415s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255508423s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.2( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.008439064s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255508423s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.7( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.008694649s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255630493s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.029641151s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276763916s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.4( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.008438110s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255477905s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.029623032s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276763916s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.4( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.008310318s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255477905s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.b( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320713997s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607154846s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.029263496s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276779175s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.029225349s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276779175s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.056722641s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343223572s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.9( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320680618s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607238770s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.056654930s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343223572s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.d( v 49'19 (0'0,49'19] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.367019653s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 active pruub 86.614639282s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.d( v 49'19 (0'0,49'19] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366834641s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 unknown NOTIFY pruub 86.614639282s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.9( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320650101s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607238770s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.007536888s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255508423s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.007513046s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255508423s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.e( v 49'19 (0'0,49'19] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366608620s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 active pruub 86.614677429s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.028731346s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.276824951s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.e( v 49'19 (0'0,49'19] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366579056s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 unknown NOTIFY pruub 86.614677429s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.028711319s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.276824951s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.9( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.007251740s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255493164s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.9( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.007233620s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255493164s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029554367s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.316215515s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.a( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.019145966s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.267555237s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.a( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.019133568s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.267555237s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.13( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366267204s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614715576s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.6( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.007028580s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255470276s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.13( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366251945s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614715576s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.6( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.006989479s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255470276s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.1b( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.007177353s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255706787s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.14( v 49'19 (0'0,49'19] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366233826s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 active pruub 86.614807129s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.1b( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.007116318s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255706787s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.14( v 49'19 (0'0,49'19] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366212845s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 unknown NOTIFY pruub 86.614807129s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.15( v 49'19 (0'0,49'19] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366022110s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 active pruub 86.614700317s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.028445244s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.277153015s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.15( v 49'19 (0'0,49'19] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.366003036s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 40'18 unknown NOTIFY pruub 86.614700317s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.028430939s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.277153015s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320708275s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.607376099s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.2( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320581436s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607246399s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029534340s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.316215515s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.2( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.365869522s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614753723s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.006367683s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255264282s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.2( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.365690231s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614753723s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.1( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.365839005s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614830017s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.006259918s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.255287170s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.005935669s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255264282s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320687294s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.607376099s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.1( v 40'18 (0'0,40'18] local-lis/les=48/49 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.365448952s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614830017s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.053709030s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.303298950s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.16( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.365176201s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614776611s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.053682327s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.303298950s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.16( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.365155220s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614776611s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.005702019s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.255287170s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.17( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.364957809s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 active pruub 86.614746094s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.056500435s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343215942s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.2( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320561409s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607246399s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[10.17( v 40'18 (0'0,40'18] local-lis/les=48/49 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=11.364895821s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 unknown NOTIFY pruub 86.614746094s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.056484222s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343215942s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029437065s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.316215515s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.1f( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.999979019s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 86.250038147s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029414177s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.316215515s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[2.1f( empty local-lis/les=40/41 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.999907494s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 86.250038147s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.056344986s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343215942s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320436478s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.607315063s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.056324959s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343215942s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.052886963s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 88.303291321s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.6( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320519447s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607421875s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320419312s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.607315063s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.056096077s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343017578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.6( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320500374s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607421875s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.056077003s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343017578s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029606819s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.316627502s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=42/43 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.052713394s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 88.303291321s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029584885s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.316627502s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029959679s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.317024231s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029942513s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.317024231s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320363998s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 41'483 active pruub 89.607460022s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.4( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320351601s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607452393s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.4( v 33'6 (0'0,33'6] local-lis/les=46/47 n=1 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320333481s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607452393s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.1f( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320333481s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 89.607460022s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.1b( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320262909s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607460022s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029821396s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.317054749s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.1b( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320243835s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607460022s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[3.18( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029806137s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.317054749s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.055833817s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343093872s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.1a( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320237160s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607505798s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320220947s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.607536316s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.055788040s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343093872s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.1a( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320172310s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607505798s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320199966s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.607536316s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[3.7( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029712677s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.317115784s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029699326s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.317115784s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.11( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.18( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320163727s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607635498s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.18( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320144653s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607635498s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320034981s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.607543945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320014954s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.607543945s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[5.14( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.1f( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319993019s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607582092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.12( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319960594s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.607574463s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.1f( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319975853s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607582092s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.055845261s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343482971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319940567s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.607574463s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.13( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029516220s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.317207336s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.055825233s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343482971s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029496193s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.317207336s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.1d( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319849968s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607620239s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[5.15( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.1d( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.319835663s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607620239s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029431343s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.317222595s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029407501s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.317222595s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320132256s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 active pruub 89.607963562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.055714607s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 95.343551636s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.055693626s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 95.343551636s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320111275s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 89.607963562s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029408455s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 91.317321777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.029394150s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 91.317321777s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.1c( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320021629s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 active pruub 89.607971191s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.15( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[8.1c( v 33'6 (0'0,33'6] local-lis/les=46/47 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.320006371s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 89.607971191s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.1d( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.17( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.12( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.17( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.13( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[3.5( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.15( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.11( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.9( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.d( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.16( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.c( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.16( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.f( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.8( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-mon[75097]: 2.5 scrub starts
Jan 23 18:36:46 compute-0 ceph-mon[75097]: 2.5 scrub ok
Jan 23 18:36:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 23 18:36:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 23 18:36:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[3.8( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.3( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[3.e( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.7( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.4( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.9( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.1( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.b( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.5( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.9( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.a( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.6( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.a( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[2.1b( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.1a( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.19( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[5.18( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[5.3( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[3.11( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.6( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[5.2( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.1f( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[3.16( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.2( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[5.5( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.3( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.f( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.1c( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[5.4( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.1d( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[5.7( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.1( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.c( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.f( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[3.1b( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.18( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[5.1e( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[2.19( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009736061s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.495346069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009687424s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.495346069s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009603500s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.495353699s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.010195732s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.496063232s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.010174751s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.496063232s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009500504s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.495513916s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009483337s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.495513916s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015851974s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 active pruub 100.502014160s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015834808s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.502014160s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009195328s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.495460510s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009176254s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.495460510s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009047508s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.495376587s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.013286591s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 active pruub 100.499641418s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009028435s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.495376587s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009069443s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.495452881s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.013260841s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.499641418s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009055138s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.495452881s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008961678s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.495468140s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.013123512s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 active pruub 100.499664307s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008920670s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.495468140s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.013102531s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.499664307s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009384155s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.495971680s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015522003s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 active pruub 100.502182007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009343147s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.495971680s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015509605s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.502182007s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015584946s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 active pruub 100.502059937s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009193420s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.495986938s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015271187s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.502059937s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009161949s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.495986938s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008796692s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.495681763s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008769989s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.495681763s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009044647s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.496025085s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.009034157s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.496025085s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015909195s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 active pruub 100.502929688s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015214920s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 active pruub 100.502296448s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008969307s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.496063232s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015842438s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 active pruub 100.502944946s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008953094s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.496063232s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015831947s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.502944946s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015196800s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.502296448s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=15.015892029s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.502929688s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008769989s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.496040344s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008546829s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.495353699s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008737564s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.496040344s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008728981s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.496055603s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008698463s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.496047974s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008712769s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.496055603s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008675575s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.496055603s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008680344s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.496047974s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=13.008666992s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.496055603s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[4.1c( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[4.1b( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[4.a( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[4.1a( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[4.1( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[6.7( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.5( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.7( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[6.9( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.9( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[6.5( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.4( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[6.1( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.2( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.d( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.8( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.10( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.f( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.12( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[6.3( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.994623184s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.496055603s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.994588852s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.496055603s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.994537354s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.496063232s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.994494438s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.496063232s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.994421005s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 98.496093750s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 50 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.994397163s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 98.496093750s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 50 pg[4.14( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[4.e( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[4.11( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[4.13( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 50 pg[4.18( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 23 18:36:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 23 18:36:46 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[5.1e( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.19( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[4.1c( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[4.11( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[4.a( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[4.1( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[4.13( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[4.e( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[4.1b( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[4.18( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[4.1a( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[3.1e( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[8.15( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[3.1d( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[8.11( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[3.8( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[3.5( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[8.2( v 33'6 (0'0,33'6] local-lis/les=50/51 n=1 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[3.7( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.1b( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.18( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.10( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.f( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.b( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.9( v 49'19 lc 37'8 (0'0,49'19] local-lis/les=50/51 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.c( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.1( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[5.7( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.15( v 49'19 lc 37'3 (0'0,49'19] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.1d( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.8( v 40'18 (0'0,40'18] local-lis/les=50/51 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.6( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=50/51 n=1 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[5.4( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.4( v 40'18 (0'0,40'18] local-lis/les=50/51 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.9( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.1c( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.3( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.f( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[5.5( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.2( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.17( v 40'18 (0'0,40'18] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.1f( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.f( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.6( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.e( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.c( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.d( v 49'19 lc 37'5 (0'0,49'19] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[5.2( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[5.3( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.a( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.b( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.e( v 49'19 lc 37'4 (0'0,49'19] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.9( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.8( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.1( v 40'18 (0'0,40'18] local-lis/les=50/51 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.1e( v 40'18 (0'0,40'18] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.16( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.17( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.15( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.1d( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.1f( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[5.15( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.13( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.12( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[5.14( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.1a( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[2.11( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.7( v 40'18 (0'0,40'18] local-lis/les=50/51 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.14( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[3.1f( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[10.16( v 40'18 (0'0,40'18] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 51 pg[8.18( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[8.d( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.10( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.14( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.12( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.8( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.9( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.5( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.7( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[6.5( v 36'39 lc 35'7 (0'0,36'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.d( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.f( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[6.f( v 36'39 lc 35'1 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[6.7( v 36'39 lc 35'13 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.4( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[8.4( v 33'6 (0'0,33'6] local-lis/les=50/51 n=1 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[3.e( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[3.11( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[8.1b( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[8.1c( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[8.12( v 33'6 (0'0,33'6] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[3.16( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[4.2( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 51 pg[3.18( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[6.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=36'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.19( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.18( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.14( v 49'19 lc 37'7 (0'0,49'19] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.1a( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.12( v 49'19 lc 40'17 (0'0,49'19] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.13( v 40'18 (0'0,40'18] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.1b( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[6.d( v 36'39 lc 35'8 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.10( v 40'18 (0'0,40'18] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.11( v 40'18 (0'0,40'18] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.1( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.1d( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.6( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.f( v 40'18 (0'0,40'18] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.7( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.4( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.c( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.9( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.a( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.2( v 40'18 (0'0,40'18] local-lis/les=50/51 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.3( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.5( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.b( v 40'18 (0'0,40'18] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.f( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.9( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.d( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.6( v 40'18 (0'0,40'18] local-lis/les=50/51 n=1 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.16( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.19( v 40'18 (0'0,40'18] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[10.1a( v 40'18 (0'0,40'18] local-lis/les=50/51 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=40'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.15( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.12( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.13( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[2.17( empty local-lis/les=50/51 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:46 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 51 pg[5.11( empty local-lis/les=50/51 n=0 ec=42/22 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:47 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 23 18:36:47 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 23 18:36:47 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v111: 274 pgs: 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 23 18:36:47 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 23 18:36:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 23 18:36:47 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 23 18:36:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 23 18:36:47 compute-0 ceph-mon[75097]: 2.4 scrub starts
Jan 23 18:36:47 compute-0 ceph-mon[75097]: 2.4 scrub ok
Jan 23 18:36:47 compute-0 ceph-mon[75097]: pgmap v108: 274 pgs: 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 23 18:36:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 23 18:36:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:36:47 compute-0 ceph-mon[75097]: osdmap e50: 3 total, 3 up, 3 in
Jan 23 18:36:47 compute-0 ceph-mon[75097]: osdmap e51: 3 total, 3 up, 3 in
Jan 23 18:36:48 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 23 18:36:48 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 23 18:36:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 23 18:36:48 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 23 18:36:48 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 52 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=13.240256310s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 active pruub 100.502952576s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:48 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 52 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=13.240192413s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.502952576s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:48 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 52 pg[6.6( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=13.238805771s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 active pruub 100.502090454s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:48 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 52 pg[6.6( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=13.238781929s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.502090454s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:48 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 52 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=13.239380836s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 active pruub 100.502944946s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:48 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 52 pg[6.e( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=13.239337921s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 active pruub 100.502914429s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:48 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 52 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=13.239349365s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.502944946s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:48 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 52 pg[6.e( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=13.239310265s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 100.502914429s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=51/52 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=47'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 52 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:48 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 23 18:36:48 compute-0 ceph-mon[75097]: 5.1f scrub starts
Jan 23 18:36:48 compute-0 ceph-mon[75097]: 5.1f scrub ok
Jan 23 18:36:48 compute-0 ceph-mon[75097]: pgmap v111: 274 pgs: 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:36:48 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 23 18:36:48 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 23 18:36:48 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 23 18:36:48 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 23 18:36:48 compute-0 ceph-mon[75097]: osdmap e52: 3 total, 3 up, 3 in
Jan 23 18:36:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 23 18:36:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 23 18:36:49 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=47'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=47'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 53 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.028156281s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 active pruub 98.120712280s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.028095245s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.120712280s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.052180290s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=47'484 lcod 41'483 active pruub 98.144821167s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.052103043s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=47'484 lcod 41'483 unknown NOTIFY pruub 98.144821167s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.027954102s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 active pruub 98.120704651s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051883698s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 active pruub 98.144638062s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.027905464s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.120704651s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051932335s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 active pruub 98.144752502s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051831245s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.144638062s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051886559s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.144752502s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051871300s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 active pruub 98.144813538s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051678658s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 active pruub 98.144912720s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051605225s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.144912720s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051496506s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.144813538s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051175117s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 active pruub 98.144683838s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051136017s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.144683838s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051320076s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 active pruub 98.144950867s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051429749s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 active pruub 98.144912720s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051271439s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.144950867s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.051204681s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.144912720s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=52/53 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[6.e( v 36'39 lc 35'12 (0'0,36'39] local-lis/les=52/53 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:49 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 53 pg[6.6( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=52/53 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:49 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v114: 274 pgs: 6 active+recovery_wait+remapped, 14 peering, 254 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 41/250 objects misplaced (16.400%); 1.1 KiB/s, 2 keys/s, 20 objects/s recovering
Jan 23 18:36:50 compute-0 ceph-mon[75097]: 10.1f scrub starts
Jan 23 18:36:50 compute-0 ceph-mon[75097]: 10.1f scrub ok
Jan 23 18:36:50 compute-0 ceph-mon[75097]: osdmap e53: 3 total, 3 up, 3 in
Jan 23 18:36:50 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 23 18:36:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 23 18:36:50 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 23 18:36:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 23 18:36:50 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 23 18:36:50 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.831717491s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 active pruub 98.145088196s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.831639290s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.145088196s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.831376076s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 active pruub 98.145133972s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.831281662s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.145133972s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.830999374s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 active pruub 98.145042419s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.830915451s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.145042419s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.830579758s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 active pruub 98.145011902s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.830382347s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 active pruub 98.145011902s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.830509186s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.145011902s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.830284119s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.145011902s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:50 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.827898979s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 active pruub 98.144699097s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:50 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 54 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=13.827744484s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 98.144699097s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=53/54 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=53/54 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=47'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=53/54 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=53/54 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=53/54 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=53/54 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=53/54 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=53/54 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=53/54 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:50 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 54 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=53/54 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:51 compute-0 ceph-mon[75097]: pgmap v114: 274 pgs: 6 active+recovery_wait+remapped, 14 peering, 254 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 41/250 objects misplaced (16.400%); 1.1 KiB/s, 2 keys/s, 20 objects/s recovering
Jan 23 18:36:51 compute-0 ceph-mon[75097]: 7.19 scrub starts
Jan 23 18:36:51 compute-0 ceph-mon[75097]: 7.19 scrub ok
Jan 23 18:36:51 compute-0 ceph-mon[75097]: osdmap e54: 3 total, 3 up, 3 in
Jan 23 18:36:51 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 23 18:36:51 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 23 18:36:51 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 23 18:36:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 23 18:36:51 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 23 18:36:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 23 18:36:51 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 23 18:36:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 55 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=54/55 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 55 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 55 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=54/55 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 55 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=54/55 n=7 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 55 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 55 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:51 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v117: 274 pgs: 6 active+recovery_wait+remapped, 14 peering, 254 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 41/250 objects misplaced (16.400%); 1.1 KiB/s, 2 keys/s, 20 objects/s recovering
Jan 23 18:36:52 compute-0 ceph-mon[75097]: 4.1e scrub starts
Jan 23 18:36:52 compute-0 ceph-mon[75097]: 4.1e scrub ok
Jan 23 18:36:52 compute-0 ceph-mon[75097]: 8.16 scrub starts
Jan 23 18:36:52 compute-0 ceph-mon[75097]: 8.16 scrub ok
Jan 23 18:36:52 compute-0 ceph-mon[75097]: osdmap e55: 3 total, 3 up, 3 in
Jan 23 18:36:52 compute-0 ceph-mon[75097]: pgmap v117: 274 pgs: 6 active+recovery_wait+remapped, 14 peering, 254 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 41/250 objects misplaced (16.400%); 1.1 KiB/s, 2 keys/s, 20 objects/s recovering
Jan 23 18:36:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:53 compute-0 ceph-mon[75097]: 5.10 scrub starts
Jan 23 18:36:53 compute-0 ceph-mon[75097]: 5.10 scrub ok
Jan 23 18:36:53 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v118: 274 pgs: 6 active+recovery_wait+remapped, 14 peering, 254 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 41/250 objects misplaced (16.400%); 830 B/s, 1 keys/s, 14 objects/s recovering
Jan 23 18:36:54 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 23 18:36:54 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 23 18:36:55 compute-0 ceph-mon[75097]: pgmap v118: 274 pgs: 6 active+recovery_wait+remapped, 14 peering, 254 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 41/250 objects misplaced (16.400%); 830 B/s, 1 keys/s, 14 objects/s recovering
Jan 23 18:36:55 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 23 18:36:55 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 23 18:36:55 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v119: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 767 B/s, 15 objects/s recovering
Jan 23 18:36:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 23 18:36:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 23 18:36:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 23 18:36:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 23 18:36:56 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 23 18:36:56 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 23 18:36:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 23 18:36:56 compute-0 ceph-mon[75097]: 8.17 scrub starts
Jan 23 18:36:56 compute-0 ceph-mon[75097]: 8.17 scrub ok
Jan 23 18:36:56 compute-0 ceph-mon[75097]: pgmap v119: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 767 B/s, 15 objects/s recovering
Jan 23 18:36:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 23 18:36:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 23 18:36:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 23 18:36:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 23 18:36:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 23 18:36:56 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 23 18:36:56 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 56 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=14.521621704s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=36'39 active pruub 104.704208374s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:56 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 56 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=14.521573067s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=36'39 unknown NOTIFY pruub 104.704208374s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:56 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 56 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=14.520710945s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=36'39 active pruub 104.703483582s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:56 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 56 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=14.520656586s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=36'39 unknown NOTIFY pruub 104.703483582s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:56 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 56 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=14.520095825s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=36'39 active pruub 104.703247070s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:56 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 56 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=14.520015717s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=36'39 unknown NOTIFY pruub 104.703247070s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:56 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 56 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=14.520149231s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=36'39 active pruub 104.703498840s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:56 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 56 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=14.520059586s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=36'39 unknown NOTIFY pruub 104.703498840s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:56 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 56 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:56 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 56 pg[6.3( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:56 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:56 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 56 pg[6.7( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:57 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 23 18:36:57 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 23 18:36:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 23 18:36:57 compute-0 ceph-mon[75097]: 2.1a scrub starts
Jan 23 18:36:57 compute-0 ceph-mon[75097]: 2.1a scrub ok
Jan 23 18:36:57 compute-0 ceph-mon[75097]: 3.1a scrub starts
Jan 23 18:36:57 compute-0 ceph-mon[75097]: 3.1a scrub ok
Jan 23 18:36:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 23 18:36:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 23 18:36:57 compute-0 ceph-mon[75097]: osdmap e56: 3 total, 3 up, 3 in
Jan 23 18:36:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 23 18:36:57 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 23 18:36:57 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 57 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:57 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 57 pg[6.7( v 36'39 lc 35'13 (0'0,36'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:57 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 57 pg[6.f( v 36'39 lc 35'1 (0'0,36'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:57 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 57 pg[6.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=56/57 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=36'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:36:57 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v122: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 258 B/s, 6 objects/s recovering
Jan 23 18:36:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 23 18:36:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 23 18:36:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 23 18:36:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 23 18:36:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:36:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 23 18:36:58 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 23 18:36:58 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 23 18:36:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 23 18:36:58 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 23 18:36:58 compute-0 ceph-mon[75097]: 7.1e scrub starts
Jan 23 18:36:58 compute-0 ceph-mon[75097]: 7.1e scrub ok
Jan 23 18:36:58 compute-0 ceph-mon[75097]: osdmap e57: 3 total, 3 up, 3 in
Jan 23 18:36:58 compute-0 ceph-mon[75097]: pgmap v122: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 258 B/s, 6 objects/s recovering
Jan 23 18:36:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 23 18:36:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 23 18:36:59 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 23 18:36:59 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 23 18:36:59 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 58 pg[6.4( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=10.107419014s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=36'39 lcod 0'0 active pruub 108.500022888s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:59 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 58 pg[6.c( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=10.109779358s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=36'39 lcod 0'0 active pruub 108.503059387s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:36:59 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 58 pg[6.c( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=10.109750748s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 108.503059387s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:59 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 58 pg[6.4( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=10.106723785s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 108.500022888s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:36:59 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:59 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:36:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 23 18:36:59 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 23 18:36:59 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 23 18:36:59 compute-0 ceph-mon[75097]: osdmap e58: 3 total, 3 up, 3 in
Jan 23 18:36:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 23 18:36:59 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 23 18:36:59 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v125: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
Jan 23 18:36:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 23 18:36:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 23 18:36:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 23 18:36:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 23 18:36:59 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 59 pg[6.4( v 36'39 lc 35'9 (0'0,36'39] local-lis/les=58/59 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:00 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 59 pg[6.c( v 36'39 lc 35'11 (0'0,36'39] local-lis/les=58/59 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:37:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:37:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:37:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:37:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:37:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:37:00 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 23 18:37:00 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 23 18:37:00 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 23 18:37:00 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 23 18:37:00 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event 26d3ee6f-7ac2-4dbf-9f71-3ebe5367809e (Global Recovery Event) in 35 seconds
Jan 23 18:37:00 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 23 18:37:00 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 23 18:37:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 23 18:37:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 23 18:37:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 23 18:37:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 23 18:37:01 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 23 18:37:01 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 23 18:37:01 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 23 18:37:01 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 23 18:37:01 compute-0 ceph-mon[75097]: 10.1d scrub starts
Jan 23 18:37:01 compute-0 ceph-mon[75097]: 10.1d scrub ok
Jan 23 18:37:01 compute-0 ceph-mon[75097]: osdmap e59: 3 total, 3 up, 3 in
Jan 23 18:37:01 compute-0 ceph-mon[75097]: pgmap v125: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
Jan 23 18:37:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 23 18:37:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 23 18:37:01 compute-0 anacron[7482]: Job `cron.daily' started
Jan 23 18:37:01 compute-0 anacron[7482]: Job `cron.daily' terminated
Jan 23 18:37:01 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 23 18:37:01 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v127: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 152 B/s, 2 keys/s, 1 objects/s recovering
Jan 23 18:37:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 23 18:37:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 23 18:37:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 23 18:37:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 23 18:37:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 23 18:37:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 23 18:37:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 23 18:37:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 23 18:37:02 compute-0 ceph-mon[75097]: 10.1c scrub starts
Jan 23 18:37:02 compute-0 ceph-mon[75097]: 3.19 scrub starts
Jan 23 18:37:02 compute-0 ceph-mon[75097]: 10.1c scrub ok
Jan 23 18:37:02 compute-0 ceph-mon[75097]: 3.19 scrub ok
Jan 23 18:37:02 compute-0 ceph-mon[75097]: 4.1f scrub starts
Jan 23 18:37:02 compute-0 ceph-mon[75097]: 4.1f scrub ok
Jan 23 18:37:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 23 18:37:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 23 18:37:02 compute-0 ceph-mon[75097]: 2.14 scrub starts
Jan 23 18:37:02 compute-0 ceph-mon[75097]: 2.14 scrub ok
Jan 23 18:37:02 compute-0 ceph-mon[75097]: 7.1d scrub starts
Jan 23 18:37:02 compute-0 ceph-mon[75097]: 7.1d scrub ok
Jan 23 18:37:02 compute-0 ceph-mon[75097]: osdmap e60: 3 total, 3 up, 3 in
Jan 23 18:37:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 23 18:37:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 23 18:37:02 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 23 18:37:02 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 23 18:37:02 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 23 18:37:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:03 compute-0 ceph-mon[75097]: pgmap v127: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 152 B/s, 2 keys/s, 1 objects/s recovering
Jan 23 18:37:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 23 18:37:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 23 18:37:03 compute-0 ceph-mon[75097]: osdmap e61: 3 total, 3 up, 3 in
Jan 23 18:37:03 compute-0 ceph-mon[75097]: 8.13 scrub starts
Jan 23 18:37:03 compute-0 ceph-mon[75097]: 8.13 scrub ok
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 61 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=8.300128937s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=41'483 lcod 0'0 active pruub 105.603576660s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 61 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=8.300054550s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.603576660s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:03 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61) [2] r=0 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 60 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=15.410646439s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=36'39 active pruub 112.715408325s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 61 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=15.409893036s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=36'39 unknown NOTIFY pruub 112.715408325s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 61 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=8.301457405s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=41'483 lcod 0'0 active pruub 105.607772827s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 60 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=15.397117615s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=36'39 active pruub 112.703567505s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 61 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=8.301720619s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=41'483 lcod 0'0 active pruub 105.608299255s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 61 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=15.397040367s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=36'39 unknown NOTIFY pruub 112.703567505s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 61 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=8.301363945s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.607772827s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 61 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=8.301680565s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.608299255s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:03 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 61 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=61 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 61 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=8.301455498s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=41'483 lcod 0'0 active pruub 105.608650208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:03 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 61 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=8.301405907s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.608650208s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:03 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61) [2] r=0 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:03 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61) [2] r=0 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:03 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 61 pg[6.5( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=61 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:03 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=61) [2] r=0 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:03 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 23 18:37:03 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 23 18:37:03 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v129: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 124 B/s, 1 keys/s, 1 objects/s recovering
Jan 23 18:37:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 23 18:37:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 23 18:37:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 23 18:37:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 23 18:37:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 23 18:37:04 compute-0 ceph-mon[75097]: 4.1d scrub starts
Jan 23 18:37:04 compute-0 ceph-mon[75097]: 4.1d scrub ok
Jan 23 18:37:04 compute-0 ceph-mon[75097]: pgmap v129: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 124 B/s, 1 keys/s, 1 objects/s recovering
Jan 23 18:37:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 23 18:37:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 23 18:37:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 23 18:37:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 23 18:37:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 23 18:37:04 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:04 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 62 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 62 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:04 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 62 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 62 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 62 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:04 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 62 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:04 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 62 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 62 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:04 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 62 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=53/54 n=7 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=9.900753975s) [2] r=-1 lpr=62 pi=[53,62)/1 crt=41'483 active pruub 113.492713928s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 62 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=53/54 n=7 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=9.900726318s) [2] r=-1 lpr=62 pi=[53,62)/1 crt=41'483 unknown NOTIFY pruub 113.492713928s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:04 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 62 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=62 pruub=10.736350060s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=41'483 active pruub 114.329208374s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=54/55 n=7 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=62 pruub=10.736474037s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=41'483 active pruub 114.329376221s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 62 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=62 pruub=10.736305237s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=41'483 unknown NOTIFY pruub 114.329208374s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:04 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=54/55 n=7 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=62 pruub=10.736450195s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=41'483 unknown NOTIFY pruub 114.329376221s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:04 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 62 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=53/54 n=6 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=9.899603844s) [2] r=-1 lpr=62 pi=[53,62)/1 crt=41'483 active pruub 113.492759705s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:04 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 62 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=53/54 n=6 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=9.899589539s) [2] r=-1 lpr=62 pi=[53,62)/1 crt=41'483 unknown NOTIFY pruub 113.492759705s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=62) [2] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=62) [2] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=62) [2] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:04 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 62 pg[6.5( v 36'39 lc 35'7 (0'0,36'39] local-lis/les=60/62 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=61 pi=[50,60)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:04 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=62) [2] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:04 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 62 pg[6.d( v 36'39 lc 35'8 (0'0,36'39] local-lis/les=60/62 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=61 pi=[50,60)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:05 compute-0 ceph-mgr[75390]: [progress INFO root] Writing back 15 completed events
Jan 23 18:37:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 18:37:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 23 18:37:05 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v131: 274 pgs: 4 unknown, 2 peering, 268 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 270 B/s, 0 objects/s recovering
Jan 23 18:37:05 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:37:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 23 18:37:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 23 18:37:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 23 18:37:06 compute-0 ceph-mon[75097]: osdmap e62: 3 total, 3 up, 3 in
Jan 23 18:37:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 23 18:37:06 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:06 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:06 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:06 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:06 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:06 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:06 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:06 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:06 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 63 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=62/63 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:06 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 63 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=62/63 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:06 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 23 18:37:06 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 63 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=53/54 n=7 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:06 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 63 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:06 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 63 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=54/55 n=7 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:06 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 63 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=53/54 n=7 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:06 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 63 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:06 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 63 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=54/55 n=7 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:06 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 63 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=53/54 n=6 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:06 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 63 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=53/54 n=6 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:06 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 63 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:06 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 63 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[46,62)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:06 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 23 18:37:06 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 23 18:37:06 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 23 18:37:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 23 18:37:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 23 18:37:07 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 64 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 64 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 64 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 64 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:07 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 64 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=62/63 n=7 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=15.129195213s) [2] async=[2] r=-1 lpr=64 pi=[46,64)/1 crt=41'483 lcod 0'0 active pruub 115.994651794s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 64 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=62/63 n=7 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=15.129118919s) [2] r=-1 lpr=64 pi=[46,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 115.994651794s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:07 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 64 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=15.134515762s) [2] async=[2] r=-1 lpr=64 pi=[46,64)/1 crt=41'483 lcod 0'0 active pruub 116.000251770s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 64 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=15.134423256s) [2] r=-1 lpr=64 pi=[46,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 116.000251770s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:07 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 64 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=62/63 n=7 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=15.127936363s) [2] async=[2] r=-1 lpr=64 pi=[46,64)/1 crt=41'483 lcod 0'0 active pruub 115.994621277s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 64 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=62/63 n=7 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=15.127868652s) [2] r=-1 lpr=64 pi=[46,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 115.994621277s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:07 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 64 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=15.133390427s) [2] async=[2] r=-1 lpr=64 pi=[46,64)/1 crt=41'483 lcod 0'0 active pruub 116.000335693s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 64 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=15.133320808s) [2] r=-1 lpr=64 pi=[46,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 116.000335693s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 64 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 64 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 64 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 64 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:07 compute-0 ceph-mon[75097]: pgmap v131: 274 pgs: 4 unknown, 2 peering, 268 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 270 B/s, 0 objects/s recovering
Jan 23 18:37:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:37:07 compute-0 ceph-mon[75097]: osdmap e63: 3 total, 3 up, 3 in
Jan 23 18:37:07 compute-0 ceph-mon[75097]: 10.1b scrub starts
Jan 23 18:37:07 compute-0 ceph-mon[75097]: 10.1b scrub ok
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 64 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=63/64 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 64 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=63/64 n=7 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 64 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=63/64 n=6 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 64 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=63/64 n=7 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:07 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 23 18:37:07 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 23 18:37:07 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v134: 274 pgs: 4 unknown, 2 peering, 268 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 277 B/s, 0 objects/s recovering
Jan 23 18:37:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 23 18:37:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 23 18:37:07 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 65 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=63/64 n=6 ec=46/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.205397606s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=41'483 active pruub 122.036209106s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 65 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=63/64 n=6 ec=46/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.205316544s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=41'483 unknown NOTIFY pruub 122.036209106s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 65 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=63/64 n=7 ec=46/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.209402084s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=41'483 active pruub 122.040420532s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 65 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=63/64 n=7 ec=46/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.209356308s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=41'483 unknown NOTIFY pruub 122.040420532s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=63/64 n=7 ec=46/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.209281921s) [2] async=[2] r=-1 lpr=65 pi=[53,65)/1 crt=41'483 active pruub 122.040550232s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 65 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=63/64 n=6 ec=46/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.208981514s) [2] async=[2] r=-1 lpr=65 pi=[53,65)/1 crt=41'483 active pruub 122.040420532s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 65 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=63/64 n=6 ec=46/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.208934784s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=41'483 unknown NOTIFY pruub 122.040420532s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:07 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=63/64 n=7 ec=46/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.209217072s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=41'483 unknown NOTIFY pruub 122.040550232s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=64/65 n=6 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=64/65 n=7 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=64/65 n=6 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:07 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 65 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=64/65 n=7 ec=46/34 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:08 compute-0 ceph-mon[75097]: 4.b scrub starts
Jan 23 18:37:08 compute-0 ceph-mon[75097]: 4.b scrub ok
Jan 23 18:37:08 compute-0 ceph-mon[75097]: osdmap e64: 3 total, 3 up, 3 in
Jan 23 18:37:08 compute-0 ceph-mon[75097]: 4.6 scrub starts
Jan 23 18:37:08 compute-0 ceph-mon[75097]: 4.6 scrub ok
Jan 23 18:37:08 compute-0 ceph-mon[75097]: osdmap e65: 3 total, 3 up, 3 in
Jan 23 18:37:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 23 18:37:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 23 18:37:09 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 23 18:37:09 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 66 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=65/66 n=7 ec=46/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:09 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 66 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=46/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:09 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 66 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=65/66 n=7 ec=46/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:09 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 66 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=46/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:09 compute-0 ceph-mon[75097]: pgmap v134: 274 pgs: 4 unknown, 2 peering, 268 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 277 B/s, 0 objects/s recovering
Jan 23 18:37:09 compute-0 ceph-mon[75097]: osdmap e66: 3 total, 3 up, 3 in
Jan 23 18:37:09 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v137: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.0 KiB/s wr, 44 op/s; 538 B/s, 12 objects/s recovering
Jan 23 18:37:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 23 18:37:09 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 23 18:37:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 23 18:37:09 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 23 18:37:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 23 18:37:10 compute-0 ceph-mon[75097]: pgmap v137: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.0 KiB/s wr, 44 op/s; 538 B/s, 12 objects/s recovering
Jan 23 18:37:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 23 18:37:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 23 18:37:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 23 18:37:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 23 18:37:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 23 18:37:10 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 23 18:37:10 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 67 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=67 pruub=14.661648750s) [2] r=-1 lpr=67 pi=[44,67)/1 crt=36'39 lcod 0'0 active pruub 124.502517700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:10 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 67 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=67 pruub=14.661406517s) [2] r=-1 lpr=67 pi=[44,67)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 124.502517700s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:10 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=67) [2] r=0 lpr=67 pi=[44,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:10 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 67 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=67 pruub=8.931033134s) [2] r=-1 lpr=67 pi=[46,67)/1 crt=41'483 lcod 0'0 active pruub 113.608192444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:10 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 67 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=67 pruub=8.931002617s) [2] r=-1 lpr=67 pi=[46,67)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 113.608192444s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:10 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 67 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=67 pruub=8.931161880s) [2] r=-1 lpr=67 pi=[46,67)/1 crt=41'483 lcod 0'0 active pruub 113.608650208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:10 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 67 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=67 pruub=8.931135178s) [2] r=-1 lpr=67 pi=[46,67)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 113.608650208s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:10 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=67) [2] r=0 lpr=67 pi=[46,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:10 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=67) [2] r=0 lpr=67 pi=[46,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 23 18:37:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 23 18:37:11 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 23 18:37:11 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[46,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:11 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[46,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:11 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[46,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:11 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[46,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:11 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 68 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=0 lpr=68 pi=[46,68)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:11 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 68 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=0 lpr=68 pi=[46,68)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:11 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 68 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=0 lpr=68 pi=[46,68)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:11 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 68 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=0 lpr=68 pi=[46,68)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:11 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 68 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=67/68 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=67) [2] r=0 lpr=67 pi=[44,67)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 23 18:37:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 23 18:37:11 compute-0 ceph-mon[75097]: osdmap e67: 3 total, 3 up, 3 in
Jan 23 18:37:11 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 23 18:37:11 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 23 18:37:11 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 23 18:37:11 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 23 18:37:11 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v140: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.0 KiB/s wr, 44 op/s; 538 B/s, 12 objects/s recovering
Jan 23 18:37:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 23 18:37:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 23 18:37:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 23 18:37:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 23 18:37:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 23 18:37:12 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 23 18:37:12 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 23 18:37:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 23 18:37:12 compute-0 ceph-mon[75097]: osdmap e68: 3 total, 3 up, 3 in
Jan 23 18:37:12 compute-0 ceph-mon[75097]: 2.12 scrub starts
Jan 23 18:37:12 compute-0 ceph-mon[75097]: 2.12 scrub ok
Jan 23 18:37:12 compute-0 ceph-mon[75097]: 7.7 scrub starts
Jan 23 18:37:12 compute-0 ceph-mon[75097]: 7.7 scrub ok
Jan 23 18:37:12 compute-0 ceph-mon[75097]: pgmap v140: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.0 KiB/s wr, 44 op/s; 538 B/s, 12 objects/s recovering
Jan 23 18:37:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 23 18:37:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 23 18:37:12 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 69 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=14.666599274s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=36'39 lcod 0'0 active pruub 120.703712463s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:12 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 69 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=14.666554451s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 120.703712463s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:12 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 23 18:37:12 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:12 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 69 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=68/69 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[46,68)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:12 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 69 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=68/69 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[46,68)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:12 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 23 18:37:12 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 23 18:37:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 23 18:37:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 23 18:37:13 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 23 18:37:13 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 70 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:13 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 70 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:13 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 70 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:13 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 70 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:13 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 70 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=68/69 n=6 ec=46/34 lis/c=68/46 les/c/f=69/47/0 sis=70 pruub=14.967902184s) [2] async=[2] r=-1 lpr=70 pi=[46,70)/1 crt=41'483 lcod 0'0 active pruub 122.065818787s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:13 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 70 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=68/69 n=6 ec=46/34 lis/c=68/46 les/c/f=69/47/0 sis=70 pruub=14.967842102s) [2] r=-1 lpr=70 pi=[46,70)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 122.065818787s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:13 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 70 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=68/69 n=7 ec=46/34 lis/c=68/46 les/c/f=69/47/0 sis=70 pruub=14.967590332s) [2] async=[2] r=-1 lpr=70 pi=[46,70)/1 crt=41'483 lcod 0'0 active pruub 122.065963745s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:13 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 70 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=68/69 n=7 ec=46/34 lis/c=68/46 les/c/f=69/47/0 sis=70 pruub=14.967418671s) [2] r=-1 lpr=70 pi=[46,70)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 122.065963745s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:13 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 70 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=69/70 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:13 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 23 18:37:13 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 23 18:37:13 compute-0 ceph-mon[75097]: osdmap e69: 3 total, 3 up, 3 in
Jan 23 18:37:13 compute-0 ceph-mon[75097]: 4.19 scrub starts
Jan 23 18:37:13 compute-0 ceph-mon[75097]: 4.19 scrub ok
Jan 23 18:37:13 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v143: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 23 18:37:13 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 23 18:37:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 23 18:37:13 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 23 18:37:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 23 18:37:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 23 18:37:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 23 18:37:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 23 18:37:14 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 23 18:37:14 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 71 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=70/71 n=7 ec=46/34 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:14 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 71 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=70/71 n=6 ec=46/34 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:14 compute-0 ceph-mon[75097]: osdmap e70: 3 total, 3 up, 3 in
Jan 23 18:37:14 compute-0 ceph-mon[75097]: pgmap v143: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:14 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 23 18:37:14 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 23 18:37:14 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 23 18:37:14 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 23 18:37:14 compute-0 ceph-mon[75097]: osdmap e71: 3 total, 3 up, 3 in
Jan 23 18:37:14 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 71 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=44/23 lis/c=52/52 les/c/f=53/53/0 sis=71 pruub=14.924622536s) [0] r=-1 lpr=71 pi=[52,71)/1 crt=36'39 lcod 0'0 active pruub 123.098289490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:14 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 71 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=44/23 lis/c=52/52 les/c/f=53/53/0 sis=71 pruub=14.924393654s) [0] r=-1 lpr=71 pi=[52,71)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 123.098289490s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:14 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 71 pg[6.a( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=52/52 les/c/f=53/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:15 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 23 18:37:15 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 23 18:37:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 23 18:37:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 23 18:37:15 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 23 18:37:15 compute-0 ceph-mon[75097]: 10.18 scrub starts
Jan 23 18:37:15 compute-0 ceph-mon[75097]: 10.18 scrub ok
Jan 23 18:37:15 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 72 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=71/72 n=1 ec=44/23 lis/c=52/52 les/c/f=53/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:15 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 23 18:37:15 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 23 18:37:15 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v146: 274 pgs: 3 peering, 271 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 3 objects/s recovering
Jan 23 18:37:16 compute-0 ceph-mon[75097]: osdmap e72: 3 total, 3 up, 3 in
Jan 23 18:37:16 compute-0 ceph-mon[75097]: 8.8 scrub starts
Jan 23 18:37:16 compute-0 ceph-mon[75097]: 8.8 scrub ok
Jan 23 18:37:16 compute-0 ceph-mon[75097]: pgmap v146: 274 pgs: 3 peering, 271 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 3 objects/s recovering
Jan 23 18:37:16 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 23 18:37:16 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 23 18:37:17 compute-0 ceph-mon[75097]: 4.3 scrub starts
Jan 23 18:37:17 compute-0 ceph-mon[75097]: 4.3 scrub ok
Jan 23 18:37:17 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 23 18:37:17 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 23 18:37:17 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v147: 274 pgs: 3 peering, 271 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 94 B/s, 2 objects/s recovering
Jan 23 18:37:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:18 compute-0 ceph-mon[75097]: 8.a scrub starts
Jan 23 18:37:18 compute-0 ceph-mon[75097]: 8.a scrub ok
Jan 23 18:37:18 compute-0 ceph-mon[75097]: pgmap v147: 274 pgs: 3 peering, 271 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 94 B/s, 2 objects/s recovering
Jan 23 18:37:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 23 18:37:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 23 18:37:19 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 23 18:37:19 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 23 18:37:19 compute-0 ceph-mon[75097]: 4.0 scrub starts
Jan 23 18:37:19 compute-0 ceph-mon[75097]: 4.0 scrub ok
Jan 23 18:37:19 compute-0 ceph-mon[75097]: 2.10 scrub starts
Jan 23 18:37:19 compute-0 ceph-mon[75097]: 2.10 scrub ok
Jan 23 18:37:19 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 23 18:37:19 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 23 18:37:19 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v148: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 1 objects/s recovering
Jan 23 18:37:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 23 18:37:19 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 23 18:37:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 23 18:37:19 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 23 18:37:19 compute-0 sudo[98297]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlajulzaeuninuophbzttmulqtsnamci ; /usr/bin/python3'
Jan 23 18:37:19 compute-0 sudo[98297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:37:20 compute-0 python3[98299]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:37:20 compute-0 podman[98300]: 2026-01-23 18:37:20.138313605 +0000 UTC m=+0.050567603 container create e052e699202cfbc9d2ee461d14a0b534f4e89f17cc4cddb2c6916fb3d9c12519 (image=quay.io/ceph/ceph:v20, name=magical_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:37:20 compute-0 systemd[1]: Started libpod-conmon-e052e699202cfbc9d2ee461d14a0b534f4e89f17cc4cddb2c6916fb3d9c12519.scope.
Jan 23 18:37:20 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab25a3c7c4045fad0a590c4b54fc45460b4d6ba67b9c95c2bb547d9719c1b73f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab25a3c7c4045fad0a590c4b54fc45460b4d6ba67b9c95c2bb547d9719c1b73f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:20 compute-0 podman[98300]: 2026-01-23 18:37:20.121298994 +0000 UTC m=+0.033553002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:37:20 compute-0 podman[98300]: 2026-01-23 18:37:20.215834874 +0000 UTC m=+0.128088892 container init e052e699202cfbc9d2ee461d14a0b534f4e89f17cc4cddb2c6916fb3d9c12519 (image=quay.io/ceph/ceph:v20, name=magical_wescoff, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 18:37:20 compute-0 podman[98300]: 2026-01-23 18:37:20.223101574 +0000 UTC m=+0.135355572 container start e052e699202cfbc9d2ee461d14a0b534f4e89f17cc4cddb2c6916fb3d9c12519 (image=quay.io/ceph/ceph:v20, name=magical_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:37:20 compute-0 podman[98300]: 2026-01-23 18:37:20.226077787 +0000 UTC m=+0.138331795 container attach e052e699202cfbc9d2ee461d14a0b534f4e89f17cc4cddb2c6916fb3d9c12519 (image=quay.io/ceph/ceph:v20, name=magical_wescoff, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 18:37:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 23 18:37:20 compute-0 ceph-mon[75097]: 4.c scrub starts
Jan 23 18:37:20 compute-0 ceph-mon[75097]: 4.c scrub ok
Jan 23 18:37:20 compute-0 ceph-mon[75097]: pgmap v148: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 1 objects/s recovering
Jan 23 18:37:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 23 18:37:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 23 18:37:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 23 18:37:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 23 18:37:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 23 18:37:20 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 23 18:37:20 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 73 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=73 pruub=9.139784813s) [1] r=-1 lpr=73 pi=[56,73)/1 crt=36'39 active pruub 128.520935059s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:20 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 73 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=73 pruub=9.139618874s) [1] r=-1 lpr=73 pi=[56,73)/1 crt=36'39 unknown NOTIFY pruub 128.520935059s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:20 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 73 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=73) [1] r=0 lpr=73 pi=[56,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:20 compute-0 magical_wescoff[98315]: could not fetch user info: no user info saved
Jan 23 18:37:20 compute-0 systemd[1]: libpod-e052e699202cfbc9d2ee461d14a0b534f4e89f17cc4cddb2c6916fb3d9c12519.scope: Deactivated successfully.
Jan 23 18:37:20 compute-0 podman[98300]: 2026-01-23 18:37:20.541607077 +0000 UTC m=+0.453861075 container died e052e699202cfbc9d2ee461d14a0b534f4e89f17cc4cddb2c6916fb3d9c12519 (image=quay.io/ceph/ceph:v20, name=magical_wescoff, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:37:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab25a3c7c4045fad0a590c4b54fc45460b4d6ba67b9c95c2bb547d9719c1b73f-merged.mount: Deactivated successfully.
Jan 23 18:37:20 compute-0 podman[98300]: 2026-01-23 18:37:20.578819418 +0000 UTC m=+0.491073416 container remove e052e699202cfbc9d2ee461d14a0b534f4e89f17cc4cddb2c6916fb3d9c12519 (image=quay.io/ceph/ceph:v20, name=magical_wescoff, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:37:20 compute-0 systemd[1]: libpod-conmon-e052e699202cfbc9d2ee461d14a0b534f4e89f17cc4cddb2c6916fb3d9c12519.scope: Deactivated successfully.
Jan 23 18:37:20 compute-0 sudo[98297]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:20 compute-0 sudo[98434]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzggjmkwgkejhimldsauzsyhynmtyzav ; /usr/bin/python3'
Jan 23 18:37:20 compute-0 sudo[98434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:37:20 compute-0 python3[98436]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:37:20 compute-0 podman[98437]: 2026-01-23 18:37:20.945883673 +0000 UTC m=+0.049248950 container create 8ce82989da9d9959ada53098b2a463566e7dff2b62d637b15fc17b341e63d2b5 (image=quay.io/ceph/ceph:v20, name=frosty_thompson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:37:20 compute-0 systemd[1]: Started libpod-conmon-8ce82989da9d9959ada53098b2a463566e7dff2b62d637b15fc17b341e63d2b5.scope.
Jan 23 18:37:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea99f5292828186f265e5a6ca13c782e2e517295de7a06191f21810dd00f149a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea99f5292828186f265e5a6ca13c782e2e517295de7a06191f21810dd00f149a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:21 compute-0 podman[98437]: 2026-01-23 18:37:20.927886837 +0000 UTC m=+0.031252144 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 23 18:37:21 compute-0 podman[98437]: 2026-01-23 18:37:21.030463377 +0000 UTC m=+0.133828674 container init 8ce82989da9d9959ada53098b2a463566e7dff2b62d637b15fc17b341e63d2b5 (image=quay.io/ceph/ceph:v20, name=frosty_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:37:21 compute-0 podman[98437]: 2026-01-23 18:37:21.035785198 +0000 UTC m=+0.139150475 container start 8ce82989da9d9959ada53098b2a463566e7dff2b62d637b15fc17b341e63d2b5 (image=quay.io/ceph/ceph:v20, name=frosty_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 18:37:21 compute-0 podman[98437]: 2026-01-23 18:37:21.039536951 +0000 UTC m=+0.142902228 container attach 8ce82989da9d9959ada53098b2a463566e7dff2b62d637b15fc17b341e63d2b5 (image=quay.io/ceph/ceph:v20, name=frosty_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:37:21 compute-0 frosty_thompson[98453]: {
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "user_id": "openstack",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "display_name": "openstack",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "email": "",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "suspended": 0,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "max_buckets": 1000,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "subusers": [],
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "keys": [
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         {
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:             "user": "openstack",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:             "access_key": "XFHKGWDCIO6MJTV251GO",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:             "secret_key": "Pmzz5XtFdGsUAEImetCbesMbYea0RvazL9PAUWIr",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:             "active": true,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:             "create_date": "2026-01-23T18:37:21.257081Z"
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         }
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     ],
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "swift_keys": [],
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "caps": [],
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "op_mask": "read, write, delete",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "default_placement": "",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "default_storage_class": "",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "placement_tags": [],
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "bucket_quota": {
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         "enabled": false,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         "check_on_raw": false,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         "max_size": -1,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         "max_size_kb": 0,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         "max_objects": -1
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     },
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "user_quota": {
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         "enabled": false,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         "check_on_raw": false,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         "max_size": -1,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         "max_size_kb": 0,
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:         "max_objects": -1
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     },
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "temp_url_keys": [],
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "type": "rgw",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "mfa_ids": [],
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "account_id": "",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "path": "/",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "create_date": "2026-01-23T18:37:21.256828Z",
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "tags": [],
Jan 23 18:37:21 compute-0 frosty_thompson[98453]:     "group_ids": []
Jan 23 18:37:21 compute-0 frosty_thompson[98453]: }
Jan 23 18:37:21 compute-0 frosty_thompson[98453]: 
Jan 23 18:37:21 compute-0 systemd[1]: libpod-8ce82989da9d9959ada53098b2a463566e7dff2b62d637b15fc17b341e63d2b5.scope: Deactivated successfully.
Jan 23 18:37:21 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 23 18:37:21 compute-0 podman[98539]: 2026-01-23 18:37:21.327179231 +0000 UTC m=+0.024002396 container died 8ce82989da9d9959ada53098b2a463566e7dff2b62d637b15fc17b341e63d2b5 (image=quay.io/ceph/ceph:v20, name=frosty_thompson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 18:37:21 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 23 18:37:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea99f5292828186f265e5a6ca13c782e2e517295de7a06191f21810dd00f149a-merged.mount: Deactivated successfully.
Jan 23 18:37:21 compute-0 podman[98539]: 2026-01-23 18:37:21.367100129 +0000 UTC m=+0.063923274 container remove 8ce82989da9d9959ada53098b2a463566e7dff2b62d637b15fc17b341e63d2b5 (image=quay.io/ceph/ceph:v20, name=frosty_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 18:37:21 compute-0 systemd[1]: libpod-conmon-8ce82989da9d9959ada53098b2a463566e7dff2b62d637b15fc17b341e63d2b5.scope: Deactivated successfully.
Jan 23 18:37:21 compute-0 sudo[98434]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 23 18:37:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 23 18:37:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 23 18:37:21 compute-0 ceph-mon[75097]: osdmap e73: 3 total, 3 up, 3 in
Jan 23 18:37:21 compute-0 ceph-mon[75097]: 5.17 scrub starts
Jan 23 18:37:21 compute-0 ceph-mon[75097]: 5.17 scrub ok
Jan 23 18:37:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 23 18:37:21 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 23 18:37:21 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 74 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=73/74 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=73) [1] r=0 lpr=73 pi=[56,73)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:21 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 23 18:37:21 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 23 18:37:21 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v151: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 23 18:37:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 23 18:37:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 23 18:37:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 23 18:37:22 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 23 18:37:22 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 23 18:37:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 23 18:37:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 23 18:37:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 23 18:37:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 23 18:37:22 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 23 18:37:22 compute-0 ceph-mon[75097]: osdmap e74: 3 total, 3 up, 3 in
Jan 23 18:37:22 compute-0 ceph-mon[75097]: 4.15 scrub starts
Jan 23 18:37:22 compute-0 ceph-mon[75097]: 4.15 scrub ok
Jan 23 18:37:22 compute-0 ceph-mon[75097]: pgmap v151: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 23 18:37:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 23 18:37:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:23 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 23 18:37:23 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 23 18:37:23 compute-0 ceph-mon[75097]: 8.3 scrub starts
Jan 23 18:37:23 compute-0 ceph-mon[75097]: 8.3 scrub ok
Jan 23 18:37:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 23 18:37:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 23 18:37:23 compute-0 ceph-mon[75097]: osdmap e75: 3 total, 3 up, 3 in
Jan 23 18:37:23 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 23 18:37:23 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 23 18:37:23 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v153: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 23 18:37:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 23 18:37:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 23 18:37:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 23 18:37:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 23 18:37:24 compute-0 ceph-mon[75097]: 5.8 scrub starts
Jan 23 18:37:24 compute-0 ceph-mon[75097]: 5.8 scrub ok
Jan 23 18:37:24 compute-0 ceph-mon[75097]: 4.16 scrub starts
Jan 23 18:37:24 compute-0 ceph-mon[75097]: 4.16 scrub ok
Jan 23 18:37:24 compute-0 ceph-mon[75097]: pgmap v153: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:24 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 23 18:37:24 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 23 18:37:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 23 18:37:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 23 18:37:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 23 18:37:24 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 23 18:37:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 23 18:37:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 23 18:37:25 compute-0 ceph-mon[75097]: osdmap e76: 3 total, 3 up, 3 in
Jan 23 18:37:25 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v155: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 189 B/s wr, 52 op/s; 0 B/s, 0 objects/s recovering
Jan 23 18:37:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 23 18:37:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 23 18:37:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 23 18:37:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 23 18:37:26 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 76 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=60/62 n=1 ec=44/23 lis/c=60/60 les/c/f=62/62/0 sis=76 pruub=10.667929649s) [1] r=-1 lpr=76 pi=[60,76)/1 crt=36'39 active pruub 135.597763062s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:26 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 76 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=60/62 n=1 ec=44/23 lis/c=60/60 les/c/f=62/62/0 sis=76 pruub=10.667872429s) [1] r=-1 lpr=76 pi=[60,76)/1 crt=36'39 unknown NOTIFY pruub 135.597763062s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 76 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=60/60 les/c/f=62/62/0 sis=76) [1] r=0 lpr=76 pi=[60,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 75 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=75 pruub=9.833750725s) [2] r=-1 lpr=75 pi=[46,75)/1 crt=41'483 lcod 0'0 active pruub 129.604568481s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 76 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=75 pruub=9.833559036s) [2] r=-1 lpr=75 pi=[46,75)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 129.604568481s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 75 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=75 pruub=9.837334633s) [2] r=-1 lpr=75 pi=[46,75)/1 crt=73'486 lcod 73'486 active pruub 129.608993530s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 76 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=75 pruub=9.837291718s) [2] r=-1 lpr=75 pi=[46,75)/1 crt=73'486 lcod 73'486 unknown NOTIFY pruub 129.608993530s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:26 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=75) [2] r=0 lpr=76 pi=[46,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:26 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=75) [2] r=0 lpr=76 pi=[46,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 23 18:37:26 compute-0 ceph-mon[75097]: pgmap v155: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 189 B/s wr, 52 op/s; 0 B/s, 0 objects/s recovering
Jan 23 18:37:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 23 18:37:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 23 18:37:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 23 18:37:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 23 18:37:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 23 18:37:26 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 23 18:37:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 77 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=77) [2]/[1] r=0 lpr=77 pi=[46,77)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 77 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=46/47 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=77) [2]/[1] r=0 lpr=77 pi=[46,77)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 77 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=77) [2]/[1] r=0 lpr=77 pi=[46,77)/1 crt=73'486 lcod 73'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 77 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=46/47 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=77) [2]/[1] r=0 lpr=77 pi=[46,77)/1 crt=73'486 lcod 73'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:26 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 77 pg[6.d( v 36'39 lc 35'8 (0'0,36'39] local-lis/les=76/77 n=1 ec=44/23 lis/c=60/60 les/c/f=62/62/0 sis=76) [1] r=0 lpr=76 pi=[60,76)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:26 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[46,77)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:26 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[46,77)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:26 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[46,77)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:26 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[46,77)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:27 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 23 18:37:27 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 23 18:37:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 23 18:37:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 23 18:37:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 23 18:37:27 compute-0 ceph-mon[75097]: osdmap e77: 3 total, 3 up, 3 in
Jan 23 18:37:27 compute-0 ceph-mon[75097]: 2.e scrub starts
Jan 23 18:37:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 23 18:37:27 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 23 18:37:27 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v158: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 190 B/s wr, 52 op/s; 0 B/s, 0 objects/s recovering
Jan 23 18:37:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 23 18:37:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 23 18:37:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 23 18:37:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 23 18:37:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:28 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 78 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=77/78 n=6 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[46,77)/1 crt=73'487 lcod 73'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:28 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 78 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=77/78 n=7 ec=46/34 lis/c=46/46 les/c/f=47/47/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[46,77)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 23 18:37:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 23 18:37:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 23 18:37:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 23 18:37:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 23 18:37:28 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 79 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=77/78 n=7 ec=46/34 lis/c=77/46 les/c/f=78/47/0 sis=79 pruub=15.782017708s) [2] async=[2] r=-1 lpr=79 pi=[46,79)/1 crt=41'483 lcod 0'0 active pruub 138.114578247s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:28 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 79 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=77/78 n=7 ec=46/34 lis/c=77/46 les/c/f=78/47/0 sis=79 pruub=15.781957626s) [2] r=-1 lpr=79 pi=[46,79)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 138.114578247s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:28 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 79 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=77/78 n=6 ec=46/34 lis/c=77/46 les/c/f=78/47/0 sis=79 pruub=15.779261589s) [2] async=[2] r=-1 lpr=79 pi=[46,79)/1 crt=73'487 lcod 73'486 active pruub 138.112731934s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:28 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 79 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=77/78 n=6 ec=46/34 lis/c=77/46 les/c/f=78/47/0 sis=79 pruub=15.779185295s) [2] r=-1 lpr=79 pi=[46,79)/1 crt=73'487 lcod 73'486 unknown NOTIFY pruub 138.112731934s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:28 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 79 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=0/0 n=6 ec=46/34 lis/c=77/46 les/c/f=78/47/0 sis=79) [2] r=0 lpr=79 pi=[46,79)/1 pct=0'0 crt=73'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:28 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 79 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=0/0 n=6 ec=46/34 lis/c=77/46 les/c/f=78/47/0 sis=79) [2] r=0 lpr=79 pi=[46,79)/1 crt=73'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:28 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 79 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=77/46 les/c/f=78/47/0 sis=79) [2] r=0 lpr=79 pi=[46,79)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:28 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 79 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=46/34 lis/c=77/46 les/c/f=78/47/0 sis=79) [2] r=0 lpr=79 pi=[46,79)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:28 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 79 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=9.015127182s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=36'39 active pruub 136.523132324s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:28 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 79 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=9.015077591s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=36'39 unknown NOTIFY pruub 136.523132324s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:28 compute-0 ceph-mon[75097]: 2.e scrub ok
Jan 23 18:37:28 compute-0 ceph-mon[75097]: osdmap e78: 3 total, 3 up, 3 in
Jan 23 18:37:28 compute-0 ceph-mon[75097]: pgmap v158: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 190 B/s wr, 52 op/s; 0 B/s, 0 objects/s recovering
Jan 23 18:37:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 23 18:37:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 23 18:37:28 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 79 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:37:28
Jan 23 18:37:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:37:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:37:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta']
Jan 23 18:37:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:37:29 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 23 18:37:29 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 23 18:37:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 23 18:37:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 23 18:37:29 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 23 18:37:29 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 80 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=79/80 n=7 ec=46/34 lis/c=77/46 les/c/f=78/47/0 sis=79) [2] r=0 lpr=79 pi=[46,79)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:29 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 80 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=79/80 n=6 ec=46/34 lis/c=77/46 les/c/f=78/47/0 sis=79) [2] r=0 lpr=79 pi=[46,79)/1 crt=73'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:29 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 80 pg[6.f( v 36'39 lc 35'1 (0'0,36'39] local-lis/les=79/80 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 23 18:37:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 23 18:37:29 compute-0 ceph-mon[75097]: osdmap e79: 3 total, 3 up, 3 in
Jan 23 18:37:29 compute-0 ceph-mon[75097]: osdmap e80: 3 total, 3 up, 3 in
Jan 23 18:37:29 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v161: 274 pgs: 3 peering, 271 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 3 objects/s recovering
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:37:30 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 23 18:37:30 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:37:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:37:30 compute-0 ceph-mon[75097]: 8.1 scrub starts
Jan 23 18:37:30 compute-0 ceph-mon[75097]: 8.1 scrub ok
Jan 23 18:37:30 compute-0 ceph-mon[75097]: pgmap v161: 274 pgs: 3 peering, 271 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 3 objects/s recovering
Jan 23 18:37:31 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 23 18:37:31 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 23 18:37:31 compute-0 ceph-mon[75097]: 5.a scrub starts
Jan 23 18:37:31 compute-0 ceph-mon[75097]: 5.a scrub ok
Jan 23 18:37:31 compute-0 ceph-mon[75097]: 2.c scrub starts
Jan 23 18:37:31 compute-0 ceph-mon[75097]: 2.c scrub ok
Jan 23 18:37:31 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v162: 274 pgs: 3 peering, 271 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 3 objects/s recovering
Jan 23 18:37:32 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 23 18:37:32 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 23 18:37:32 compute-0 ceph-mon[75097]: pgmap v162: 274 pgs: 3 peering, 271 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 3 objects/s recovering
Jan 23 18:37:32 compute-0 ceph-mon[75097]: 10.3 scrub starts
Jan 23 18:37:32 compute-0 ceph-mon[75097]: 10.3 scrub ok
Jan 23 18:37:32 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 23 18:37:32 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 23 18:37:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:33 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 23 18:37:33 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 23 18:37:33 compute-0 ceph-mon[75097]: 4.17 scrub starts
Jan 23 18:37:33 compute-0 ceph-mon[75097]: 4.17 scrub ok
Jan 23 18:37:33 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v163: 274 pgs: 3 peering, 271 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 77 B/s, 2 objects/s recovering
Jan 23 18:37:34 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 23 18:37:34 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 23 18:37:34 compute-0 ceph-mon[75097]: 8.0 scrub starts
Jan 23 18:37:34 compute-0 ceph-mon[75097]: 8.0 scrub ok
Jan 23 18:37:34 compute-0 ceph-mon[75097]: pgmap v163: 274 pgs: 3 peering, 271 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 77 B/s, 2 objects/s recovering
Jan 23 18:37:35 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 23 18:37:35 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v164: 274 pgs: 274 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 2 objects/s recovering
Jan 23 18:37:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 23 18:37:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 23 18:37:35 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 23 18:37:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 23 18:37:36 compute-0 ceph-mon[75097]: 3.b scrub starts
Jan 23 18:37:36 compute-0 ceph-mon[75097]: 3.b scrub ok
Jan 23 18:37:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 23 18:37:36 compute-0 sshd-session[98554]: Accepted publickey for zuul from 192.168.122.30 port 41568 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:37:36 compute-0 systemd-logind[792]: New session 33 of user zuul.
Jan 23 18:37:36 compute-0 systemd[1]: Started Session 33 of User zuul.
Jan 23 18:37:36 compute-0 sshd-session[98554]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:37:36 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 23 18:37:36 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 23 18:37:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 23 18:37:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 23 18:37:36 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 23 18:37:36 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 23 18:37:36 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 23 18:37:37 compute-0 ceph-mon[75097]: 5.1e scrub starts
Jan 23 18:37:37 compute-0 ceph-mon[75097]: pgmap v164: 274 pgs: 274 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 2 objects/s recovering
Jan 23 18:37:37 compute-0 ceph-mon[75097]: 5.1e scrub ok
Jan 23 18:37:37 compute-0 ceph-mon[75097]: 3.4 scrub starts
Jan 23 18:37:37 compute-0 ceph-mon[75097]: 3.4 scrub ok
Jan 23 18:37:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 23 18:37:37 compute-0 ceph-mon[75097]: osdmap e81: 3 total, 3 up, 3 in
Jan 23 18:37:37 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 23 18:37:37 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 23 18:37:37 compute-0 python3.9[98707]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:37:37 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 23 18:37:37 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 23 18:37:37 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v166: 274 pgs: 274 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 0 objects/s recovering
Jan 23 18:37:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 23 18:37:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 23 18:37:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 23 18:37:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 23 18:37:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 23 18:37:38 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 23 18:37:38 compute-0 ceph-mon[75097]: 2.19 scrub starts
Jan 23 18:37:38 compute-0 ceph-mon[75097]: 2.19 scrub ok
Jan 23 18:37:38 compute-0 ceph-mon[75097]: 5.b scrub starts
Jan 23 18:37:38 compute-0 ceph-mon[75097]: 5.b scrub ok
Jan 23 18:37:38 compute-0 ceph-mon[75097]: 7.0 scrub starts
Jan 23 18:37:38 compute-0 ceph-mon[75097]: 7.0 scrub ok
Jan 23 18:37:38 compute-0 ceph-mon[75097]: pgmap v166: 274 pgs: 274 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 0 objects/s recovering
Jan 23 18:37:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1083052055734967e-06 of space, bias 4.0, pg target 0.002529966246688196 quantized to 16 (current 16)
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:37:38 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Jan 23 18:37:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 23 18:37:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:37:38 compute-0 sudo[98923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zohjrxtpwmxrvrnsqnokdblxwxcpqujh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193458.3318357-27-176000830352814/AnsiballZ_command.py'
Jan 23 18:37:38 compute-0 sudo[98923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:37:38 compute-0 python3.9[98925]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:37:39 compute-0 sudo[98936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:37:39 compute-0 sudo[98936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:37:39 compute-0 sudo[98936]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:39 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v168: 274 pgs: 274 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 0 objects/s recovering
Jan 23 18:37:39 compute-0 sudo[98961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:37:39 compute-0 sudo[98961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:37:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 23 18:37:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 23 18:37:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 23 18:37:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 23 18:37:40 compute-0 ceph-mon[75097]: osdmap e82: 3 total, 3 up, 3 in
Jan 23 18:37:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 23 18:37:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:37:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 23 18:37:40 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 23 18:37:40 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev b25faf51-ecff-4a57-8f54-a18b73c16765 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 23 18:37:40 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev b25faf51-ecff-4a57-8f54-a18b73c16765 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 23 18:37:40 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event b25faf51-ecff-4a57-8f54-a18b73c16765 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 23 18:37:40 compute-0 sudo[98961]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:37:40 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:37:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:37:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:37:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:37:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:37:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:37:40 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:37:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:37:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:37:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:37:40 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:37:40 compute-0 sudo[99017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:37:40 compute-0 sudo[99017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:37:40 compute-0 sudo[99017]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:40 compute-0 sudo[99042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:37:40 compute-0 sudo[99042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:37:40 compute-0 podman[99081]: 2026-01-23 18:37:40.79101811 +0000 UTC m=+0.024316619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:37:40 compute-0 ceph-mgr[75390]: [progress INFO root] Writing back 16 completed events
Jan 23 18:37:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 18:37:41 compute-0 podman[99081]: 2026-01-23 18:37:41.111714676 +0000 UTC m=+0.345013195 container create 09e370c310be66c7a155883b9f1e47c50963a24ad35c33dd909a1247dca6cf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 18:37:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:37:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 23 18:37:41 compute-0 ceph-mon[75097]: pgmap v168: 274 pgs: 274 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 0 objects/s recovering
Jan 23 18:37:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 23 18:37:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 23 18:37:41 compute-0 ceph-mon[75097]: osdmap e83: 3 total, 3 up, 3 in
Jan 23 18:37:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:37:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:37:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:37:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:37:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:37:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:37:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 23 18:37:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 23 18:37:41 compute-0 systemd[1]: Started libpod-conmon-09e370c310be66c7a155883b9f1e47c50963a24ad35c33dd909a1247dca6cf49.scope.
Jan 23 18:37:41 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 23 18:37:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:37:41 compute-0 podman[99081]: 2026-01-23 18:37:41.456773681 +0000 UTC m=+0.690072190 container init 09e370c310be66c7a155883b9f1e47c50963a24ad35c33dd909a1247dca6cf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mccarthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:37:41 compute-0 podman[99081]: 2026-01-23 18:37:41.469738662 +0000 UTC m=+0.703037181 container start 09e370c310be66c7a155883b9f1e47c50963a24ad35c33dd909a1247dca6cf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:37:41 compute-0 podman[99081]: 2026-01-23 18:37:41.474650515 +0000 UTC m=+0.707949134 container attach 09e370c310be66c7a155883b9f1e47c50963a24ad35c33dd909a1247dca6cf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mccarthy, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 23 18:37:41 compute-0 youthful_mccarthy[99097]: 167 167
Jan 23 18:37:41 compute-0 systemd[1]: libpod-09e370c310be66c7a155883b9f1e47c50963a24ad35c33dd909a1247dca6cf49.scope: Deactivated successfully.
Jan 23 18:37:41 compute-0 podman[99081]: 2026-01-23 18:37:41.478786556 +0000 UTC m=+0.712085095 container died 09e370c310be66c7a155883b9f1e47c50963a24ad35c33dd909a1247dca6cf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 18:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa14bfef41e4889dff55497bd26803e3ec48919717f0fdb407a8cf69cfb0556a-merged.mount: Deactivated successfully.
Jan 23 18:37:41 compute-0 podman[99102]: 2026-01-23 18:37:41.839951197 +0000 UTC m=+0.350097382 container remove 09e370c310be66c7a155883b9f1e47c50963a24ad35c33dd909a1247dca6cf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mccarthy, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:37:41 compute-0 systemd[1]: libpod-conmon-09e370c310be66c7a155883b9f1e47c50963a24ad35c33dd909a1247dca6cf49.scope: Deactivated successfully.
Jan 23 18:37:41 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v171: 274 pgs: 274 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 18:37:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:37:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 23 18:37:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 23 18:37:41 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 23 18:37:41 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 23 18:37:42 compute-0 podman[99124]: 2026-01-23 18:37:41.991329003 +0000 UTC m=+0.023067485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:37:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 23 18:37:42 compute-0 podman[99124]: 2026-01-23 18:37:42.246378022 +0000 UTC m=+0.278116474 container create 6e0802d158cc789578a3dc865e226c07b10e9e982aaac09eb87070efb7c63cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:37:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:37:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 23 18:37:42 compute-0 ceph-mon[75097]: osdmap e84: 3 total, 3 up, 3 in
Jan 23 18:37:42 compute-0 ceph-mon[75097]: pgmap v171: 274 pgs: 274 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 23 18:37:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 23 18:37:42 compute-0 ceph-mon[75097]: 7.1f scrub starts
Jan 23 18:37:42 compute-0 ceph-mon[75097]: 7.1f scrub ok
Jan 23 18:37:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:37:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 23 18:37:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 23 18:37:42 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 23 18:37:42 compute-0 systemd[1]: Started libpod-conmon-6e0802d158cc789578a3dc865e226c07b10e9e982aaac09eb87070efb7c63cb8.scope.
Jan 23 18:37:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cc9eb41e4d238edd264905ed8174b1938dd47d17cbc1152e0a66ec4b474bf8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cc9eb41e4d238edd264905ed8174b1938dd47d17cbc1152e0a66ec4b474bf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cc9eb41e4d238edd264905ed8174b1938dd47d17cbc1152e0a66ec4b474bf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cc9eb41e4d238edd264905ed8174b1938dd47d17cbc1152e0a66ec4b474bf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cc9eb41e4d238edd264905ed8174b1938dd47d17cbc1152e0a66ec4b474bf8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:42 compute-0 podman[99124]: 2026-01-23 18:37:42.394383717 +0000 UTC m=+0.426122189 container init 6e0802d158cc789578a3dc865e226c07b10e9e982aaac09eb87070efb7c63cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 18:37:42 compute-0 podman[99124]: 2026-01-23 18:37:42.401768216 +0000 UTC m=+0.433506718 container start 6e0802d158cc789578a3dc865e226c07b10e9e982aaac09eb87070efb7c63cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wilbur, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:37:42 compute-0 podman[99124]: 2026-01-23 18:37:42.406374702 +0000 UTC m=+0.438113174 container attach 6e0802d158cc789578a3dc865e226c07b10e9e982aaac09eb87070efb7c63cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:37:42 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 85 pg[11.0( v 73'2 (0'0,73'2] local-lis/les=38/39 n=2 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=85 pruub=8.941880226s) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 73'1 mlcod 73'1 active pruub 145.276702881s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:42 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 85 pg[11.0( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=85 pruub=8.941880226s) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 73'1 mlcod 0'0 unknown pruub 145.276702881s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:42 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 85 pg[9.13( v 73'485 (0'0,73'485] local-lis/les=53/54 n=6 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=85 pruub=11.847792625s) [2] r=-1 lpr=85 pi=[53,85)/1 crt=72'484 lcod 72'484 active pruub 153.493942261s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:42 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 85 pg[9.13( v 73'485 (0'0,73'485] local-lis/les=53/54 n=6 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=85 pruub=11.847323418s) [2] r=-1 lpr=85 pi=[53,85)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 153.493942261s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:42 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2] r=0 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:42 compute-0 competent_wilbur[99141]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:37:42 compute-0 competent_wilbur[99141]: --> All data devices are unavailable
Jan 23 18:37:42 compute-0 systemd[1]: libpod-6e0802d158cc789578a3dc865e226c07b10e9e982aaac09eb87070efb7c63cb8.scope: Deactivated successfully.
Jan 23 18:37:42 compute-0 podman[99166]: 2026-01-23 18:37:42.954467439 +0000 UTC m=+0.025149581 container died 6e0802d158cc789578a3dc865e226c07b10e9e982aaac09eb87070efb7c63cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wilbur, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:37:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 23 18:37:43 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 23 18:37:43 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 23 18:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-47cc9eb41e4d238edd264905ed8174b1938dd47d17cbc1152e0a66ec4b474bf8-merged.mount: Deactivated successfully.
Jan 23 18:37:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 23 18:37:43 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 23 18:37:43 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:44 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 86 pg[9.13( v 73'485 (0'0,73'485] local-lis/les=53/54 n=6 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2]/[0] r=0 lpr=86 pi=[53,86)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:44 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 86 pg[9.13( v 73'485 (0'0,73'485] local-lis/les=53/54 n=6 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2]/[0] r=0 lpr=86 pi=[53,86)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 23 18:37:44 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 23 18:37:44 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2]/[0] r=-1 lpr=86 pi=[53,86)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:44 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2]/[0] r=-1 lpr=86 pi=[53,86)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.17( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.16( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.15( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.19( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.14( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.12( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.13( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.11( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.10( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.f( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.e( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.d( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.b( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.2( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=1 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.9( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.3( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.c( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.8( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.a( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1( v 73'2 (0'0,73'2] local-lis/les=38/39 n=1 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.4( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.5( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.6( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.7( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1a( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1b( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.18( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1c( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1d( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1e( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1f( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=38/39 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 23 18:37:44 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 18:37:44 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 23 18:37:44 compute-0 ceph-mon[75097]: osdmap e85: 3 total, 3 up, 3 in
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.16( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.19( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.17( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.15( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.14( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.10( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.11( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.13( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.12( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.f( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.b( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.d( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.e( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.3( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.2( v 73'2 (0'0,73'2] local-lis/les=85/86 n=1 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.9( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.0( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 73'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.8( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.c( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1( v 73'2 (0'0,73'2] local-lis/les=85/86 n=1 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.4( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.5( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.a( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.6( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.7( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1b( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1c( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1a( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1d( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.18( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1f( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 86 pg[11.1e( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=38/38 les/c/f=39/39/0 sis=85) [1] r=0 lpr=85 pi=[38,85)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:44 compute-0 podman[99166]: 2026-01-23 18:37:44.957657243 +0000 UTC m=+2.028339405 container remove 6e0802d158cc789578a3dc865e226c07b10e9e982aaac09eb87070efb7c63cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wilbur, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:37:44 compute-0 systemd[1]: libpod-conmon-6e0802d158cc789578a3dc865e226c07b10e9e982aaac09eb87070efb7c63cb8.scope: Deactivated successfully.
Jan 23 18:37:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 23 18:37:44 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 23 18:37:44 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 87 pg[9.13( v 73'485 (0'0,73'485] local-lis/les=86/87 n=6 ec=46/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2]/[0] async=[2] r=0 lpr=86 pi=[53,86)/1 crt=73'485 lcod 72'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:45 compute-0 sudo[99042]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:45 compute-0 sudo[99186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:37:45 compute-0 sudo[99186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:37:45 compute-0 sudo[99186]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:45 compute-0 sudo[99211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:37:45 compute-0 sudo[99211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:37:45 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 23 18:37:45 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 23 18:37:45 compute-0 podman[99248]: 2026-01-23 18:37:45.45876104 +0000 UTC m=+0.041977147 container create bd6d9f2c46e9fc150d9496c88895dbf037aec19d24412bbaeefff73766af9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:37:45 compute-0 systemd[1]: Started libpod-conmon-bd6d9f2c46e9fc150d9496c88895dbf037aec19d24412bbaeefff73766af9bad.scope.
Jan 23 18:37:45 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:37:45 compute-0 podman[99248]: 2026-01-23 18:37:45.439671193 +0000 UTC m=+0.022887320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:37:45 compute-0 podman[99248]: 2026-01-23 18:37:45.606758003 +0000 UTC m=+0.189974190 container init bd6d9f2c46e9fc150d9496c88895dbf037aec19d24412bbaeefff73766af9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ptolemy, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:37:45 compute-0 podman[99248]: 2026-01-23 18:37:45.615798168 +0000 UTC m=+0.199014285 container start bd6d9f2c46e9fc150d9496c88895dbf037aec19d24412bbaeefff73766af9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ptolemy, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:37:45 compute-0 ecstatic_ptolemy[99265]: 167 167
Jan 23 18:37:45 compute-0 systemd[1]: libpod-bd6d9f2c46e9fc150d9496c88895dbf037aec19d24412bbaeefff73766af9bad.scope: Deactivated successfully.
Jan 23 18:37:45 compute-0 conmon[99265]: conmon bd6d9f2c46e9fc150d94 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd6d9f2c46e9fc150d9496c88895dbf037aec19d24412bbaeefff73766af9bad.scope/container/memory.events
Jan 23 18:37:45 compute-0 podman[99248]: 2026-01-23 18:37:45.623303081 +0000 UTC m=+0.206519188 container attach bd6d9f2c46e9fc150d9496c88895dbf037aec19d24412bbaeefff73766af9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ptolemy, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 18:37:45 compute-0 podman[99248]: 2026-01-23 18:37:45.623713002 +0000 UTC m=+0.206929109 container died bd6d9f2c46e9fc150d9496c88895dbf037aec19d24412bbaeefff73766af9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-76b11c74d29d4cb974497395bf63cbe7ff7b56a326f66c169b9b16daded46234-merged.mount: Deactivated successfully.
Jan 23 18:37:45 compute-0 podman[99248]: 2026-01-23 18:37:45.669431379 +0000 UTC m=+0.252647486 container remove bd6d9f2c46e9fc150d9496c88895dbf037aec19d24412bbaeefff73766af9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ptolemy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:37:45 compute-0 systemd[1]: libpod-conmon-bd6d9f2c46e9fc150d9496c88895dbf037aec19d24412bbaeefff73766af9bad.scope: Deactivated successfully.
Jan 23 18:37:45 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 1 peering, 1 remapped+peering, 31 unknown, 272 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:45 compute-0 podman[99290]: 2026-01-23 18:37:45.812927022 +0000 UTC m=+0.023042025 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:37:45 compute-0 podman[99290]: 2026-01-23 18:37:45.907362506 +0000 UTC m=+0.117477469 container create 30fae68257eb6fee3f90f6111187821b36f7e0a1739f73e69c04a3792bbeb505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:37:45 compute-0 ceph-mon[75097]: 3.0 scrub starts
Jan 23 18:37:45 compute-0 ceph-mon[75097]: 3.0 scrub ok
Jan 23 18:37:45 compute-0 ceph-mon[75097]: osdmap e86: 3 total, 3 up, 3 in
Jan 23 18:37:45 compute-0 ceph-mon[75097]: pgmap v174: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:45 compute-0 ceph-mon[75097]: 10.5 scrub starts
Jan 23 18:37:45 compute-0 ceph-mon[75097]: 3.2 scrub starts
Jan 23 18:37:45 compute-0 ceph-mon[75097]: 10.5 scrub ok
Jan 23 18:37:45 compute-0 ceph-mon[75097]: 3.2 scrub ok
Jan 23 18:37:45 compute-0 ceph-mon[75097]: osdmap e87: 3 total, 3 up, 3 in
Jan 23 18:37:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 23 18:37:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 23 18:37:45 compute-0 systemd[1]: Started libpod-conmon-30fae68257eb6fee3f90f6111187821b36f7e0a1739f73e69c04a3792bbeb505.scope.
Jan 23 18:37:45 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 23 18:37:45 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 88 pg[9.13( v 73'485 (0'0,73'485] local-lis/les=86/87 n=6 ec=46/34 lis/c=86/53 les/c/f=87/54/0 sis=88 pruub=14.988589287s) [2] async=[2] r=-1 lpr=88 pi=[53,88)/1 crt=73'485 lcod 72'484 active pruub 159.886215210s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:45 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 88 pg[9.13( v 73'485 (0'0,73'485] local-lis/les=86/87 n=6 ec=46/34 lis/c=86/53 les/c/f=87/54/0 sis=88 pruub=14.988509178s) [2] r=-1 lpr=88 pi=[53,88)/1 crt=73'485 lcod 72'484 unknown NOTIFY pruub 159.886215210s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:45 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 88 pg[9.13( v 73'485 (0'0,73'485] local-lis/les=0/0 n=6 ec=46/34 lis/c=86/53 les/c/f=87/54/0 sis=88) [2] r=0 lpr=88 pi=[53,88)/1 pct=0'0 crt=73'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:45 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 88 pg[9.13( v 73'485 (0'0,73'485] local-lis/les=0/0 n=6 ec=46/34 lis/c=86/53 les/c/f=87/54/0 sis=88) [2] r=0 lpr=88 pi=[53,88)/1 crt=73'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4223945ddea9c99735fb9313a129dc268ac9178363c9f3bb0d3a10a73d109d30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4223945ddea9c99735fb9313a129dc268ac9178363c9f3bb0d3a10a73d109d30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4223945ddea9c99735fb9313a129dc268ac9178363c9f3bb0d3a10a73d109d30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4223945ddea9c99735fb9313a129dc268ac9178363c9f3bb0d3a10a73d109d30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:46 compute-0 podman[99290]: 2026-01-23 18:37:46.139001162 +0000 UTC m=+0.349116145 container init 30fae68257eb6fee3f90f6111187821b36f7e0a1739f73e69c04a3792bbeb505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rubin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:37:46 compute-0 podman[99290]: 2026-01-23 18:37:46.148046748 +0000 UTC m=+0.358161701 container start 30fae68257eb6fee3f90f6111187821b36f7e0a1739f73e69c04a3792bbeb505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:37:46 compute-0 podman[99290]: 2026-01-23 18:37:46.151868771 +0000 UTC m=+0.361983754 container attach 30fae68257eb6fee3f90f6111187821b36f7e0a1739f73e69c04a3792bbeb505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:37:46 compute-0 adoring_rubin[99307]: {
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:     "0": [
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:         {
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "devices": [
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "/dev/loop3"
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             ],
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_name": "ceph_lv0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_size": "21470642176",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "name": "ceph_lv0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "tags": {
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.cluster_name": "ceph",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.crush_device_class": "",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.encrypted": "0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.objectstore": "bluestore",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.osd_id": "0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.type": "block",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.vdo": "0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.with_tpm": "0"
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             },
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "type": "block",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "vg_name": "ceph_vg0"
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:         }
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:     ],
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:     "1": [
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:         {
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "devices": [
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "/dev/loop4"
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             ],
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_name": "ceph_lv1",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_size": "21470642176",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "name": "ceph_lv1",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "tags": {
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.cluster_name": "ceph",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.crush_device_class": "",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.encrypted": "0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.objectstore": "bluestore",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.osd_id": "1",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.type": "block",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.vdo": "0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.with_tpm": "0"
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             },
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "type": "block",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "vg_name": "ceph_vg1"
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:         }
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:     ],
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:     "2": [
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:         {
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "devices": [
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "/dev/loop5"
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             ],
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_name": "ceph_lv2",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_size": "21470642176",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "name": "ceph_lv2",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "tags": {
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.cluster_name": "ceph",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.crush_device_class": "",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.encrypted": "0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.objectstore": "bluestore",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.osd_id": "2",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.type": "block",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.vdo": "0",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:                 "ceph.with_tpm": "0"
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             },
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "type": "block",
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:             "vg_name": "ceph_vg2"
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:         }
Jan 23 18:37:46 compute-0 adoring_rubin[99307]:     ]
Jan 23 18:37:46 compute-0 adoring_rubin[99307]: }
Jan 23 18:37:46 compute-0 systemd[1]: libpod-30fae68257eb6fee3f90f6111187821b36f7e0a1739f73e69c04a3792bbeb505.scope: Deactivated successfully.
Jan 23 18:37:46 compute-0 podman[99290]: 2026-01-23 18:37:46.452455973 +0000 UTC m=+0.662570956 container died 30fae68257eb6fee3f90f6111187821b36f7e0a1739f73e69c04a3792bbeb505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 18:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4223945ddea9c99735fb9313a129dc268ac9178363c9f3bb0d3a10a73d109d30-merged.mount: Deactivated successfully.
Jan 23 18:37:46 compute-0 podman[99290]: 2026-01-23 18:37:46.510113293 +0000 UTC m=+0.720228246 container remove 30fae68257eb6fee3f90f6111187821b36f7e0a1739f73e69c04a3792bbeb505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:37:46 compute-0 systemd[1]: libpod-conmon-30fae68257eb6fee3f90f6111187821b36f7e0a1739f73e69c04a3792bbeb505.scope: Deactivated successfully.
Jan 23 18:37:46 compute-0 sudo[99211]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:46 compute-0 sudo[99331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:37:46 compute-0 sudo[99331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:37:46 compute-0 sudo[99331]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:46 compute-0 sudo[99356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:37:46 compute-0 sudo[99356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:37:46 compute-0 podman[99398]: 2026-01-23 18:37:46.974914467 +0000 UTC m=+0.039246643 container create 1453395e3449036b09387e89db0ee06f5b1fbbfe86799f2c4abdcda11c47a47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 18:37:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 23 18:37:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 23 18:37:46 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 23 18:37:47 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 23 18:37:47 compute-0 ceph-mon[75097]: 2.0 scrub starts
Jan 23 18:37:47 compute-0 ceph-mon[75097]: 2.0 scrub ok
Jan 23 18:37:47 compute-0 ceph-mon[75097]: pgmap v176: 305 pgs: 1 peering, 1 remapped+peering, 31 unknown, 272 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:47 compute-0 ceph-mon[75097]: osdmap e88: 3 total, 3 up, 3 in
Jan 23 18:37:47 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 89 pg[9.13( v 73'485 (0'0,73'485] local-lis/les=88/89 n=6 ec=46/34 lis/c=86/53 les/c/f=87/54/0 sis=88) [2] r=0 lpr=88 pi=[53,88)/1 crt=73'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:47 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 23 18:37:47 compute-0 systemd[1]: Started libpod-conmon-1453395e3449036b09387e89db0ee06f5b1fbbfe86799f2c4abdcda11c47a47b.scope.
Jan 23 18:37:47 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:37:47 compute-0 podman[99398]: 2026-01-23 18:37:46.957731712 +0000 UTC m=+0.022063908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:37:47 compute-0 podman[99398]: 2026-01-23 18:37:47.055913118 +0000 UTC m=+0.120245314 container init 1453395e3449036b09387e89db0ee06f5b1fbbfe86799f2c4abdcda11c47a47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:37:47 compute-0 podman[99398]: 2026-01-23 18:37:47.061742186 +0000 UTC m=+0.126074392 container start 1453395e3449036b09387e89db0ee06f5b1fbbfe86799f2c4abdcda11c47a47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rosalind, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 23 18:37:47 compute-0 laughing_rosalind[99414]: 167 167
Jan 23 18:37:47 compute-0 systemd[1]: libpod-1453395e3449036b09387e89db0ee06f5b1fbbfe86799f2c4abdcda11c47a47b.scope: Deactivated successfully.
Jan 23 18:37:47 compute-0 podman[99398]: 2026-01-23 18:37:47.065847677 +0000 UTC m=+0.130179873 container attach 1453395e3449036b09387e89db0ee06f5b1fbbfe86799f2c4abdcda11c47a47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rosalind, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 23 18:37:47 compute-0 podman[99398]: 2026-01-23 18:37:47.066406723 +0000 UTC m=+0.130738929 container died 1453395e3449036b09387e89db0ee06f5b1fbbfe86799f2c4abdcda11c47a47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rosalind, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c07a3bf8045046038d4742eb8c65faaf4652d389267fe787529aab51be01921-merged.mount: Deactivated successfully.
Jan 23 18:37:47 compute-0 podman[99398]: 2026-01-23 18:37:47.104915874 +0000 UTC m=+0.169248060 container remove 1453395e3449036b09387e89db0ee06f5b1fbbfe86799f2c4abdcda11c47a47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rosalind, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:37:47 compute-0 systemd[1]: libpod-conmon-1453395e3449036b09387e89db0ee06f5b1fbbfe86799f2c4abdcda11c47a47b.scope: Deactivated successfully.
Jan 23 18:37:47 compute-0 podman[99440]: 2026-01-23 18:37:47.256206527 +0000 UTC m=+0.031294397 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:37:47 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 23 18:37:47 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 23 18:37:47 compute-0 sudo[98923]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:47 compute-0 sshd-session[99455]: Invalid user  from 38.110.46.239 port 23006
Jan 23 18:37:47 compute-0 sshd-session[99455]: Connection closed by invalid user  38.110.46.239 port 23006 [preauth]
Jan 23 18:37:47 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 1 peering, 1 remapped+peering, 31 unknown, 272 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:47 compute-0 sshd-session[98557]: Connection closed by 192.168.122.30 port 41568
Jan 23 18:37:47 compute-0 sshd-session[98554]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:37:47 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Jan 23 18:37:47 compute-0 systemd[1]: session-33.scope: Consumed 8.246s CPU time.
Jan 23 18:37:47 compute-0 systemd-logind[792]: Session 33 logged out. Waiting for processes to exit.
Jan 23 18:37:47 compute-0 systemd-logind[792]: Removed session 33.
Jan 23 18:37:48 compute-0 podman[99440]: 2026-01-23 18:37:48.047451873 +0000 UTC m=+0.822539733 container create 91b9267db064b018aac07a104b324521ed85257ae6a0a2c1482c613c883d36c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 18:37:48 compute-0 ceph-mon[75097]: osdmap e89: 3 total, 3 up, 3 in
Jan 23 18:37:48 compute-0 ceph-mon[75097]: 2.18 scrub starts
Jan 23 18:37:48 compute-0 ceph-mon[75097]: 2.18 scrub ok
Jan 23 18:37:48 compute-0 ceph-mon[75097]: 7.d scrub starts
Jan 23 18:37:48 compute-0 ceph-mon[75097]: 7.d scrub ok
Jan 23 18:37:48 compute-0 systemd[1]: Started libpod-conmon-91b9267db064b018aac07a104b324521ed85257ae6a0a2c1482c613c883d36c9.scope.
Jan 23 18:37:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f0a8d85b68dc87366b632e861326d24770fc6a6ef76c0fa9976da2cf0f801c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f0a8d85b68dc87366b632e861326d24770fc6a6ef76c0fa9976da2cf0f801c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f0a8d85b68dc87366b632e861326d24770fc6a6ef76c0fa9976da2cf0f801c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f0a8d85b68dc87366b632e861326d24770fc6a6ef76c0fa9976da2cf0f801c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:37:48 compute-0 podman[99440]: 2026-01-23 18:37:48.179691811 +0000 UTC m=+0.954779671 container init 91b9267db064b018aac07a104b324521ed85257ae6a0a2c1482c613c883d36c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:37:48 compute-0 podman[99440]: 2026-01-23 18:37:48.186878295 +0000 UTC m=+0.961966135 container start 91b9267db064b018aac07a104b324521ed85257ae6a0a2c1482c613c883d36c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 23 18:37:48 compute-0 podman[99440]: 2026-01-23 18:37:48.209850396 +0000 UTC m=+0.984938236 container attach 91b9267db064b018aac07a104b324521ed85257ae6a0a2c1482c613c883d36c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_greider, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 23 18:37:48 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 23 18:37:48 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 23 18:37:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:48 compute-0 lvm[99562]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:37:48 compute-0 lvm[99561]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:37:48 compute-0 lvm[99562]: VG ceph_vg1 finished
Jan 23 18:37:48 compute-0 lvm[99561]: VG ceph_vg0 finished
Jan 23 18:37:48 compute-0 lvm[99564]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:37:48 compute-0 lvm[99564]: VG ceph_vg2 finished
Jan 23 18:37:49 compute-0 laughing_greider[99483]: {}
Jan 23 18:37:49 compute-0 systemd[1]: libpod-91b9267db064b018aac07a104b324521ed85257ae6a0a2c1482c613c883d36c9.scope: Deactivated successfully.
Jan 23 18:37:49 compute-0 podman[99440]: 2026-01-23 18:37:49.038495195 +0000 UTC m=+1.813583125 container died 91b9267db064b018aac07a104b324521ed85257ae6a0a2c1482c613c883d36c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_greider, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 18:37:49 compute-0 systemd[1]: libpod-91b9267db064b018aac07a104b324521ed85257ae6a0a2c1482c613c883d36c9.scope: Consumed 1.343s CPU time.
Jan 23 18:37:49 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 23 18:37:49 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 23 18:37:49 compute-0 ceph-mon[75097]: pgmap v179: 305 pgs: 1 peering, 1 remapped+peering, 31 unknown, 272 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f0a8d85b68dc87366b632e861326d24770fc6a6ef76c0fa9976da2cf0f801c2-merged.mount: Deactivated successfully.
Jan 23 18:37:49 compute-0 podman[99440]: 2026-01-23 18:37:49.715805299 +0000 UTC m=+2.490893149 container remove 91b9267db064b018aac07a104b324521ed85257ae6a0a2c1482c613c883d36c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 18:37:49 compute-0 systemd[1]: libpod-conmon-91b9267db064b018aac07a104b324521ed85257ae6a0a2c1482c613c883d36c9.scope: Deactivated successfully.
Jan 23 18:37:49 compute-0 sudo[99356]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:37:49 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 0 objects/s recovering
Jan 23 18:37:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 23 18:37:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 23 18:37:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 18:37:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:37:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:37:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:37:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:37:50 compute-0 sudo[99580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:37:50 compute-0 sudo[99580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:37:50 compute-0 sudo[99580]: pam_unix(sudo:session): session closed for user root
Jan 23 18:37:50 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 23 18:37:50 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 23 18:37:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 23 18:37:50 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 23 18:37:50 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:37:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 23 18:37:50 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 23 18:37:50 compute-0 ceph-mon[75097]: 5.0 scrub starts
Jan 23 18:37:50 compute-0 ceph-mon[75097]: 5.0 scrub ok
Jan 23 18:37:50 compute-0 ceph-mon[75097]: 10.0 scrub starts
Jan 23 18:37:50 compute-0 ceph-mon[75097]: 10.0 scrub ok
Jan 23 18:37:50 compute-0 ceph-mon[75097]: pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 0 objects/s recovering
Jan 23 18:37:50 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 23 18:37:50 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:37:50 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:37:50 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:37:51 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 23 18:37:51 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.19( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507528305s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.686996460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.17( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507519722s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687026978s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.17( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507476807s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687026978s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.15( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507402420s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687011719s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.15( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507388115s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687011719s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.14( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507621765s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687377930s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.14( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507581711s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687377930s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.19( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507206917s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.686996460s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.11( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507256508s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687454224s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.11( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507237434s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687454224s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.15( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 90 pg[11.17( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.f( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507080078s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687530518s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.f( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.507019997s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687530518s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.12( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506820679s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687515259s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.12( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506794930s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687515259s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.e( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506815910s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687637329s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.10( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506563187s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687393188s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.e( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506799698s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687637329s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 90 pg[11.14( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.11( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.10( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506539345s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687393188s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.d( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506668091s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687591553s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.b( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506584167s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687561035s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.d( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506647110s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687591553s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.b( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506508827s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687561035s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.9( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506582260s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687698364s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.9( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506562233s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687698364s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 90 pg[11.19( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.2( v 73'2 (0'0,73'2] local-lis/les=85/86 n=1 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506492615s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687683105s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.2( v 73'2 (0'0,73'2] local-lis/les=85/86 n=1 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506480217s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687683105s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.3( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506378174s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687667847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.3( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506361961s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687667847s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.8( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506353378s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687805176s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.8( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506341934s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687805176s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1( v 73'2 (0'0,73'2] local-lis/les=85/86 n=1 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506278992s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687850952s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1( v 73'2 (0'0,73'2] local-lis/les=85/86 n=1 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506262779s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687850952s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.4( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506229401s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687850952s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.4( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506208420s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687850952s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 90 pg[11.f( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.18( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506274223s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.688110352s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.18( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506254196s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.688110352s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.6( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506059647s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687942505s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.12( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.6( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506042480s) [0] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687942505s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1a( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506106377s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.688064575s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1b( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506025314s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.687973022s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1a( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506084442s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.688064575s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1b( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.505982399s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.687973022s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1c( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.505985260s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.688049316s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1c( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.505967140s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.688049316s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1e( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506031036s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.688140869s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1e( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.506017685s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.688140869s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.d( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1f( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.505998611s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 active pruub 154.688140869s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 90 pg[11.e( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 90 pg[11.1f( v 73'2 (0'0,73'2] local-lis/les=85/86 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90 pruub=9.505985260s) [2] r=-1 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 unknown NOTIFY pruub 154.688140869s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 90 pg[11.10( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 90 pg[11.1( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.b( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.9( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.3( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 90 pg[11.4( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.2( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 90 pg[11.6( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.8( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.18( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.1b( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.1c( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.1e( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.1f( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 90 pg[11.1a( empty local-lis/les=0/0 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:51 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 23 18:37:51 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 23 18:37:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 23 18:37:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 23 18:37:51 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 23 18:37:51 compute-0 ceph-mon[75097]: 8.7 scrub starts
Jan 23 18:37:51 compute-0 ceph-mon[75097]: 8.7 scrub ok
Jan 23 18:37:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 23 18:37:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:37:51 compute-0 ceph-mon[75097]: osdmap e90: 3 total, 3 up, 3 in
Jan 23 18:37:51 compute-0 ceph-mon[75097]: 8.10 scrub starts
Jan 23 18:37:51 compute-0 ceph-mon[75097]: 8.10 scrub ok
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 91 pg[11.17( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 91 pg[11.f( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 91 pg[11.1( v 73'2 (0'0,73'2] local-lis/les=90/91 n=1 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 91 pg[11.e( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 91 pg[11.19( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 91 pg[11.14( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 91 pg[11.4( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 91 pg[11.10( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 91 pg[11.6( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [0] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.11( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.1a( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.1b( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.1f( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.2( v 73'2 (0'0,73'2] local-lis/les=90/91 n=1 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.1e( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.9( v 73'2 lc 0'0 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.8( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.b( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.1c( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.3( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.12( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.15( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.d( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 91 pg[11.18( v 73'2 (0'0,73'2] local-lis/les=90/91 n=0 ec=85/38 lis/c=85/85 les/c/f=86/86/0 sis=90) [2] r=0 lpr=90 pi=[85,90)/1 crt=73'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:51 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 1 objects/s recovering
Jan 23 18:37:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 23 18:37:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 23 18:37:52 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 23 18:37:52 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 23 18:37:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 23 18:37:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 23 18:37:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 23 18:37:52 compute-0 ceph-mon[75097]: 3.d scrub starts
Jan 23 18:37:52 compute-0 ceph-mon[75097]: 3.d scrub ok
Jan 23 18:37:52 compute-0 ceph-mon[75097]: osdmap e91: 3 total, 3 up, 3 in
Jan 23 18:37:52 compute-0 ceph-mon[75097]: pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 1 objects/s recovering
Jan 23 18:37:52 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 23 18:37:52 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 92 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.574248314s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=41'483 active pruub 162.330413818s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:52 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 92 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.574196815s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=41'483 unknown NOTIFY pruub 162.330413818s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:52 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 23 18:37:52 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 92 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=92 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:53 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 23 18:37:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 23 18:37:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 23 18:37:53 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 23 18:37:53 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 23 18:37:53 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 93 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=93) [1]/[0] r=0 lpr=93 pi=[54,93)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:53 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 93 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=93) [1]/[0] r=0 lpr=93 pi=[54,93)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:53 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[54,93)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:53 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[54,93)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:53 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 23 18:37:53 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 23 18:37:53 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 23 18:37:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 23 18:37:54 compute-0 ceph-mon[75097]: 8.5 scrub starts
Jan 23 18:37:54 compute-0 ceph-mon[75097]: 8.5 scrub ok
Jan 23 18:37:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 23 18:37:54 compute-0 ceph-mon[75097]: osdmap e92: 3 total, 3 up, 3 in
Jan 23 18:37:54 compute-0 ceph-mon[75097]: osdmap e93: 3 total, 3 up, 3 in
Jan 23 18:37:55 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 23 18:37:55 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 23 18:37:55 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 0 objects/s recovering
Jan 23 18:37:56 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 23 18:37:56 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 23 18:37:56 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 23 18:37:56 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 23 18:37:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 23 18:37:57 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 23 18:37:57 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 23 18:37:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 23 18:37:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 23 18:37:57 compute-0 ceph-mon[75097]: 2.1 scrub starts
Jan 23 18:37:57 compute-0 ceph-mon[75097]: 2.1 scrub ok
Jan 23 18:37:57 compute-0 ceph-mon[75097]: 7.b scrub starts
Jan 23 18:37:57 compute-0 ceph-mon[75097]: 7.b scrub ok
Jan 23 18:37:57 compute-0 ceph-mon[75097]: pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:37:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 23 18:37:57 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 94 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=64/65 n=6 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=94 pruub=14.326008797s) [0] r=-1 lpr=94 pi=[64,94)/1 crt=41'483 active pruub 160.623611450s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:57 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 94 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=64/65 n=6 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=94 pruub=14.325000763s) [0] r=-1 lpr=94 pi=[64,94)/1 crt=41'483 unknown NOTIFY pruub 160.623611450s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:57 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=94) [0] r=0 lpr=94 pi=[64,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:57 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 23 18:37:57 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Jan 23 18:37:58 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 94 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=93/94 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=93) [1]/[0] async=[1] r=0 lpr=93 pi=[54,93)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:37:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:37:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 23 18:37:58 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 23 18:37:58 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 23 18:37:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 23 18:37:58 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 23 18:37:58 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 95 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=64/65 n=6 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=95) [0]/[2] r=0 lpr=95 pi=[64,95)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:58 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 95 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=64/65 n=6 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=95) [0]/[2] r=0 lpr=95 pi=[64,95)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:58 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 95 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[64,95)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:58 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 95 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[64,95)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:58 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 95 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=93/94 n=6 ec=46/34 lis/c=93/54 les/c/f=94/55/0 sis=95 pruub=15.384551048s) [1] async=[1] r=-1 lpr=95 pi=[54,95)/1 crt=41'483 active pruub 172.992675781s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:58 compute-0 ceph-mon[75097]: 7.14 scrub starts
Jan 23 18:37:58 compute-0 ceph-mon[75097]: 7.14 scrub ok
Jan 23 18:37:58 compute-0 ceph-mon[75097]: pgmap v187: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 0 objects/s recovering
Jan 23 18:37:58 compute-0 ceph-mon[75097]: 7.4 scrub starts
Jan 23 18:37:58 compute-0 ceph-mon[75097]: 7.4 scrub ok
Jan 23 18:37:58 compute-0 ceph-mon[75097]: 5.6 scrub starts
Jan 23 18:37:58 compute-0 ceph-mon[75097]: 5.6 scrub ok
Jan 23 18:37:58 compute-0 ceph-mon[75097]: 8.b scrub starts
Jan 23 18:37:58 compute-0 ceph-mon[75097]: 8.b scrub ok
Jan 23 18:37:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 23 18:37:58 compute-0 ceph-mon[75097]: osdmap e94: 3 total, 3 up, 3 in
Jan 23 18:37:58 compute-0 ceph-mon[75097]: pgmap v189: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Jan 23 18:37:58 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 95 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=93/94 n=6 ec=46/34 lis/c=93/54 les/c/f=94/55/0 sis=95 pruub=15.383737564s) [1] r=-1 lpr=95 pi=[54,95)/1 crt=41'483 unknown NOTIFY pruub 172.992675781s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:37:58 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 95 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=93/54 les/c/f=94/55/0 sis=95) [1] r=0 lpr=95 pi=[54,95)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:37:58 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 95 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=93/54 les/c/f=94/55/0 sis=95) [1] r=0 lpr=95 pi=[54,95)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:37:59 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 23 18:37:59 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 23 18:37:59 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 23 18:37:59 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 23 18:37:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 23 18:37:59 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 0 objects/s recovering
Jan 23 18:38:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:38:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:38:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:38:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:38:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:38:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:38:00 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 23 18:38:00 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 23 18:38:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 23 18:38:00 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 23 18:38:00 compute-0 ceph-mon[75097]: 3.10 scrub starts
Jan 23 18:38:00 compute-0 ceph-mon[75097]: 3.10 scrub ok
Jan 23 18:38:00 compute-0 ceph-mon[75097]: osdmap e95: 3 total, 3 up, 3 in
Jan 23 18:38:00 compute-0 ceph-mon[75097]: 3.f scrub starts
Jan 23 18:38:00 compute-0 ceph-mon[75097]: 3.f scrub ok
Jan 23 18:38:00 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 96 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=95/96 n=6 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=95) [0]/[2] async=[0] r=0 lpr=95 pi=[64,95)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:00 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 96 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=95/96 n=6 ec=46/34 lis/c=93/54 les/c/f=94/55/0 sis=95) [1] r=0 lpr=95 pi=[54,95)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 23 18:38:01 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 23 18:38:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 23 18:38:02 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 23 18:38:02 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 97 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=95/64 les/c/f=96/65/0 sis=97) [0] r=0 lpr=97 pi=[64,97)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:02 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 97 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=95/64 les/c/f=96/65/0 sis=97) [0] r=0 lpr=97 pi=[64,97)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:02 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 97 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=95/96 n=6 ec=46/34 lis/c=95/64 les/c/f=96/65/0 sis=97 pruub=14.425219536s) [0] async=[0] r=-1 lpr=97 pi=[64,97)/1 crt=41'483 active pruub 165.311157227s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:02 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 97 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=95/96 n=6 ec=46/34 lis/c=95/64 les/c/f=96/65/0 sis=97 pruub=14.425153732s) [0] r=-1 lpr=97 pi=[64,97)/1 crt=41'483 unknown NOTIFY pruub 165.311157227s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:02 compute-0 ceph-mon[75097]: 7.16 scrub starts
Jan 23 18:38:02 compute-0 ceph-mon[75097]: 7.16 scrub ok
Jan 23 18:38:02 compute-0 ceph-mon[75097]: pgmap v191: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 0 objects/s recovering
Jan 23 18:38:02 compute-0 ceph-mon[75097]: 10.a scrub starts
Jan 23 18:38:02 compute-0 ceph-mon[75097]: 10.a scrub ok
Jan 23 18:38:02 compute-0 ceph-mon[75097]: osdmap e96: 3 total, 3 up, 3 in
Jan 23 18:38:02 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 23 18:38:02 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 23 18:38:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 23 18:38:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 23 18:38:03 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 23 18:38:03 compute-0 ceph-mon[75097]: pgmap v193: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 23 18:38:03 compute-0 ceph-mon[75097]: osdmap e97: 3 total, 3 up, 3 in
Jan 23 18:38:03 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 98 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=97/98 n=6 ec=46/34 lis/c=95/64 les/c/f=96/65/0 sis=97) [0] r=0 lpr=97 pi=[64,97)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:03 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Jan 23 18:38:03 compute-0 sshd-session[99606]: Accepted publickey for zuul from 192.168.122.30 port 53952 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:38:03 compute-0 systemd-logind[792]: New session 34 of user zuul.
Jan 23 18:38:03 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 23 18:38:03 compute-0 sshd-session[99606]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:38:04 compute-0 ceph-mon[75097]: 10.c scrub starts
Jan 23 18:38:04 compute-0 ceph-mon[75097]: 10.c scrub ok
Jan 23 18:38:04 compute-0 ceph-mon[75097]: osdmap e98: 3 total, 3 up, 3 in
Jan 23 18:38:04 compute-0 ceph-mon[75097]: pgmap v196: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Jan 23 18:38:04 compute-0 python3.9[99759]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 23 18:38:05 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 23 18:38:05 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 23 18:38:05 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 23 18:38:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 23 18:38:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 23 18:38:06 compute-0 python3.9[99933]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:38:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 23 18:38:06 compute-0 ceph-mon[75097]: 3.c scrub starts
Jan 23 18:38:06 compute-0 ceph-mon[75097]: 3.c scrub ok
Jan 23 18:38:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 23 18:38:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 23 18:38:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 23 18:38:06 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 23 18:38:06 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 23 18:38:06 compute-0 sudo[100087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stangqrepoihhkmhhyisfsoqxznpxixd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193486.5225317-40-178055710969061/AnsiballZ_command.py'
Jan 23 18:38:06 compute-0 sudo[100087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:38:07 compute-0 python3.9[100089]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:38:07 compute-0 sudo[100087]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:07 compute-0 ceph-mon[75097]: pgmap v197: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 23 18:38:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 23 18:38:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 23 18:38:07 compute-0 ceph-mon[75097]: osdmap e99: 3 total, 3 up, 3 in
Jan 23 18:38:07 compute-0 sudo[100240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zechlzeslonyhuzscxwrabapmkjupakx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193487.3991385-52-270952600694000/AnsiballZ_stat.py'
Jan 23 18:38:07 compute-0 sudo[100240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:38:07 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 23 18:38:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 23 18:38:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 23 18:38:08 compute-0 python3.9[100242]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:38:08 compute-0 sudo[100240]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 23 18:38:08 compute-0 ceph-mon[75097]: 5.e scrub starts
Jan 23 18:38:08 compute-0 ceph-mon[75097]: 5.e scrub ok
Jan 23 18:38:08 compute-0 ceph-mon[75097]: pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 23 18:38:08 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 23 18:38:08 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 23 18:38:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 23 18:38:08 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 23 18:38:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:08 compute-0 sudo[100394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obakoxaxbrmfcpyyjtsyaycfbskbvpos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193488.307201-63-40379223956569/AnsiballZ_file.py'
Jan 23 18:38:08 compute-0 sudo[100394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:38:08 compute-0 python3.9[100396]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:38:08 compute-0 sudo[100394]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:09 compute-0 sudo[100546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybfxwtyfilcrhlniosibcsbulsacvjeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193489.1055384-72-25240484685766/AnsiballZ_file.py'
Jan 23 18:38:09 compute-0 sudo[100546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:38:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 23 18:38:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 23 18:38:09 compute-0 python3.9[100548]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:38:09 compute-0 sudo[100546]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:09 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Jan 23 18:38:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 23 18:38:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 23 18:38:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 23 18:38:10 compute-0 ceph-mon[75097]: osdmap e100: 3 total, 3 up, 3 in
Jan 23 18:38:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 23 18:38:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 23 18:38:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 23 18:38:10 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 23 18:38:10 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 101 pg[9.19( v 73'487 (0'0,73'487] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=101 pruub=9.270836830s) [2] r=-1 lpr=101 pi=[54,101)/1 crt=73'486 lcod 73'486 active pruub 178.330734253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:10 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 101 pg[9.19( v 73'487 (0'0,73'487] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=101 pruub=9.270219803s) [2] r=-1 lpr=101 pi=[54,101)/1 crt=73'486 lcod 73'486 unknown NOTIFY pruub 178.330734253s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:10 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=101) [2] r=0 lpr=101 pi=[54,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:10 compute-0 python3.9[100698]: ansible-ansible.builtin.service_facts Invoked
Jan 23 18:38:10 compute-0 network[100715]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 18:38:10 compute-0 network[100716]: 'network-scripts' will be removed from distribution in near future.
Jan 23 18:38:10 compute-0 network[100717]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 18:38:11 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 23 18:38:11 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 23 18:38:11 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:11 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 23 18:38:11 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 23 18:38:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 23 18:38:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 23 18:38:12 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 23 18:38:12 compute-0 ceph-mon[75097]: 5.d scrub starts
Jan 23 18:38:12 compute-0 ceph-mon[75097]: 5.d scrub ok
Jan 23 18:38:12 compute-0 ceph-mon[75097]: pgmap v201: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Jan 23 18:38:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 23 18:38:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 23 18:38:12 compute-0 ceph-mon[75097]: osdmap e101: 3 total, 3 up, 3 in
Jan 23 18:38:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 23 18:38:13 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 23 18:38:13 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 102 pg[9.19( v 73'487 (0'0,73'487] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=102) [2]/[0] r=0 lpr=102 pi=[54,102)/1 crt=73'486 lcod 73'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:13 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 102 pg[9.19( v 73'487 (0'0,73'487] local-lis/les=54/55 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=102) [2]/[0] r=0 lpr=102 pi=[54,102)/1 crt=73'486 lcod 73'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:13 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[54,102)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:13 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[54,102)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:13 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 23 18:38:13 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 23 18:38:13 compute-0 ceph-mon[75097]: 8.19 scrub starts
Jan 23 18:38:13 compute-0 ceph-mon[75097]: 8.19 scrub ok
Jan 23 18:38:13 compute-0 ceph-mon[75097]: pgmap v203: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:13 compute-0 ceph-mon[75097]: 3.1 scrub starts
Jan 23 18:38:13 compute-0 ceph-mon[75097]: 3.1 scrub ok
Jan 23 18:38:13 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 23 18:38:13 compute-0 ceph-mon[75097]: osdmap e102: 3 total, 3 up, 3 in
Jan 23 18:38:13 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 23 18:38:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 23 18:38:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 23 18:38:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 23 18:38:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 23 18:38:14 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 23 18:38:14 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 103 pg[9.19( v 73'487 (0'0,73'487] local-lis/les=102/103 n=6 ec=46/34 lis/c=54/54 les/c/f=55/55/0 sis=102) [2]/[0] async=[2] r=0 lpr=102 pi=[54,102)/1 crt=73'487 lcod 73'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 23 18:38:15 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 23 18:38:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 23 18:38:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 23 18:38:15 compute-0 ceph-mon[75097]: pgmap v205: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:15 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 23 18:38:15 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 23 18:38:15 compute-0 ceph-mon[75097]: osdmap e103: 3 total, 3 up, 3 in
Jan 23 18:38:17 compute-0 python3.9[100977]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:38:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 23 18:38:17 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 23 18:38:17 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 104 pg[9.19( v 73'487 (0'0,73'487] local-lis/les=102/103 n=6 ec=46/34 lis/c=102/54 les/c/f=103/55/0 sis=104 pruub=12.707571030s) [2] async=[2] r=-1 lpr=104 pi=[54,104)/1 crt=73'487 lcod 73'486 active pruub 189.030487061s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:17 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 104 pg[9.19( v 73'487 (0'0,73'487] local-lis/les=102/103 n=6 ec=46/34 lis/c=102/54 les/c/f=103/55/0 sis=104 pruub=12.707442284s) [2] r=-1 lpr=104 pi=[54,104)/1 crt=73'487 lcod 73'486 unknown NOTIFY pruub 189.030487061s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:17 compute-0 ceph-mon[75097]: pgmap v207: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 23 18:38:17 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 23 18:38:17 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 104 pg[9.19( v 73'487 (0'0,73'487] local-lis/les=0/0 n=6 ec=46/34 lis/c=102/54 les/c/f=103/55/0 sis=104) [2] r=0 lpr=104 pi=[54,104)/1 pct=0'0 crt=73'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:17 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 104 pg[9.19( v 73'487 (0'0,73'487] local-lis/les=0/0 n=6 ec=46/34 lis/c=102/54 les/c/f=103/55/0 sis=104) [2] r=0 lpr=104 pi=[54,104)/1 crt=73'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:17 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 23 18:38:18 compute-0 python3.9[101127]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:38:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 23 18:38:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 23 18:38:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 23 18:38:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 23 18:38:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 23 18:38:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 23 18:38:18 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 23 18:38:18 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 105 pg[9.19( v 73'487 (0'0,73'487] local-lis/les=104/105 n=6 ec=46/34 lis/c=102/54 les/c/f=103/55/0 sis=104) [2] r=0 lpr=104 pi=[54,104)/1 crt=73'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:18 compute-0 ceph-mon[75097]: osdmap e104: 3 total, 3 up, 3 in
Jan 23 18:38:18 compute-0 ceph-mon[75097]: pgmap v209: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 23 18:38:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 23 18:38:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:19 compute-0 python3.9[101281]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:38:19 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 23 18:38:19 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 23 18:38:19 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 23 18:38:19 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 23 18:38:19 compute-0 ceph-mon[75097]: osdmap e105: 3 total, 3 up, 3 in
Jan 23 18:38:19 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 23 18:38:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 23 18:38:19 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 23 18:38:19 compute-0 sudo[101437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brrurpbeucppzhuqyvvoduydjmxjxbtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193499.6630473-120-136314326027749/AnsiballZ_setup.py'
Jan 23 18:38:19 compute-0 sudo[101437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:38:20 compute-0 python3.9[101439]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:38:20 compute-0 sudo[101437]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 23 18:38:20 compute-0 ceph-mon[75097]: 5.1c scrub starts
Jan 23 18:38:20 compute-0 ceph-mon[75097]: 5.1c scrub ok
Jan 23 18:38:20 compute-0 ceph-mon[75097]: pgmap v211: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 23 18:38:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 23 18:38:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 23 18:38:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 23 18:38:20 compute-0 sudo[101521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtkfjtbatktdgtazpqtdotkwgytuqfwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193499.6630473-120-136314326027749/AnsiballZ_dnf.py'
Jan 23 18:38:20 compute-0 sudo[101521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:38:20 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 23 18:38:21 compute-0 python3.9[101523]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:38:21 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 23 18:38:21 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 23 18:38:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 23 18:38:21 compute-0 ceph-mon[75097]: osdmap e106: 3 total, 3 up, 3 in
Jan 23 18:38:21 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 23 18:38:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 23 18:38:21 compute-0 irqbalance[786]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 23 18:38:21 compute-0 irqbalance[786]: IRQ 26 affinity is now unmanaged
Jan 23 18:38:22 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 23 18:38:22 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 23 18:38:22 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 106 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=79/80 n=6 ec=46/34 lis/c=79/79 les/c/f=80/80/0 sis=106 pruub=11.369832039s) [0] r=-1 lpr=106 pi=[79,106)/1 crt=73'487 active pruub 182.300140381s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:22 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 106 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=79/80 n=6 ec=46/34 lis/c=79/79 les/c/f=80/80/0 sis=106 pruub=11.369797707s) [0] r=-1 lpr=106 pi=[79,106)/1 crt=73'487 unknown NOTIFY pruub 182.300140381s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:22 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=79/79 les/c/f=80/80/0 sis=106) [0] r=0 lpr=106 pi=[79,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 23 18:38:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 23 18:38:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 23 18:38:23 compute-0 ceph-mon[75097]: 3.13 scrub starts
Jan 23 18:38:23 compute-0 ceph-mon[75097]: 3.13 scrub ok
Jan 23 18:38:23 compute-0 ceph-mon[75097]: pgmap v213: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 23 18:38:23 compute-0 ceph-mon[75097]: 5.7 scrub starts
Jan 23 18:38:23 compute-0 ceph-mon[75097]: 5.7 scrub ok
Jan 23 18:38:23 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 23 18:38:23 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 107 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=79/79 les/c/f=80/80/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[79,107)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:23 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 107 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=79/79 les/c/f=80/80/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[79,107)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:23 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 107 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=79/80 n=6 ec=46/34 lis/c=79/79 les/c/f=80/80/0 sis=107) [0]/[2] r=0 lpr=107 pi=[79,107)/1 crt=73'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:23 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 107 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=79/80 n=6 ec=46/34 lis/c=79/79 les/c/f=80/80/0 sis=107) [0]/[2] r=0 lpr=107 pi=[79,107)/1 crt=73'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:23 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 23 18:38:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 23 18:38:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 23 18:38:24 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 23 18:38:24 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 23 18:38:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 23 18:38:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 23 18:38:24 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 23 18:38:24 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 108 pg[9.1e( v 73'485 (0'0,73'485] local-lis/les=64/65 n=6 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=108 pruub=11.322209358s) [0] r=-1 lpr=108 pi=[64,108)/1 crt=72'484 lcod 72'484 active pruub 184.627166748s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:24 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 108 pg[9.1e( v 73'485 (0'0,73'485] local-lis/les=64/65 n=6 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=108 pruub=11.322113037s) [0] r=-1 lpr=108 pi=[64,108)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 184.627166748s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:24 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=108) [0] r=0 lpr=108 pi=[64,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:24 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 23 18:38:24 compute-0 ceph-mon[75097]: osdmap e107: 3 total, 3 up, 3 in
Jan 23 18:38:24 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 23 18:38:24 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 108 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=107/108 n=6 ec=46/34 lis/c=79/79 les/c/f=80/80/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[79,107)/1 crt=73'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 23 18:38:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 23 18:38:25 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:25 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 23 18:38:26 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[64,109)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:26 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[64,109)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:26 compute-0 ceph-mon[75097]: pgmap v215: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:26 compute-0 ceph-mon[75097]: 7.18 scrub starts
Jan 23 18:38:26 compute-0 ceph-mon[75097]: 7.18 scrub ok
Jan 23 18:38:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 23 18:38:26 compute-0 ceph-mon[75097]: osdmap e108: 3 total, 3 up, 3 in
Jan 23 18:38:26 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 109 pg[9.1e( v 73'485 (0'0,73'485] local-lis/les=64/65 n=6 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=109) [0]/[2] r=0 lpr=109 pi=[64,109)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:26 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 109 pg[9.1e( v 73'485 (0'0,73'485] local-lis/les=64/65 n=6 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=109) [0]/[2] r=0 lpr=109 pi=[64,109)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 23 18:38:27 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 23 18:38:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 23 18:38:28 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 110 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=0/0 n=6 ec=46/34 lis/c=107/79 les/c/f=108/80/0 sis=110) [0] r=0 lpr=110 pi=[79,110)/1 pct=0'0 crt=73'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:28 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 110 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=0/0 n=6 ec=46/34 lis/c=107/79 les/c/f=108/80/0 sis=110) [0] r=0 lpr=110 pi=[79,110)/1 crt=73'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:28 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 110 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=107/108 n=6 ec=46/34 lis/c=107/79 les/c/f=108/80/0 sis=110 pruub=11.995724678s) [0] async=[0] r=-1 lpr=110 pi=[79,110)/1 crt=73'487 active pruub 189.314895630s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:28 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 110 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=107/108 n=6 ec=46/34 lis/c=107/79 les/c/f=108/80/0 sis=110 pruub=11.995543480s) [0] r=-1 lpr=110 pi=[79,110)/1 crt=73'487 unknown NOTIFY pruub 189.314895630s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:28 compute-0 ceph-mon[75097]: pgmap v217: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:28 compute-0 ceph-mon[75097]: osdmap e109: 3 total, 3 up, 3 in
Jan 23 18:38:28 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 110 pg[9.1e( v 73'485 (0'0,73'485] local-lis/les=109/110 n=6 ec=46/34 lis/c=64/64 les/c/f=65/65/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[64,109)/1 crt=73'485 lcod 72'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:38:28
Jan 23 18:38:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:38:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Jan 23 18:38:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 23 18:38:29 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 active+recovering+remapped, 1 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/251 objects misplaced (1.594%); 94 B/s, 1 objects/s recovering
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:38:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 23 18:38:30 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 23 18:38:30 compute-0 ceph-mon[75097]: pgmap v219: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:30 compute-0 ceph-mon[75097]: osdmap e110: 3 total, 3 up, 3 in
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:38:30 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 111 pg[9.1c( v 73'487 (0'0,73'487] local-lis/les=110/111 n=6 ec=46/34 lis/c=107/79 les/c/f=108/80/0 sis=110) [0] r=0 lpr=110 pi=[79,110)/1 crt=73'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:38:30 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 23 18:38:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:38:30 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 23 18:38:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 23 18:38:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 23 18:38:31 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 23 18:38:31 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 112 pg[9.1e( v 73'485 (0'0,73'485] local-lis/les=109/110 n=6 ec=46/34 lis/c=109/64 les/c/f=110/65/0 sis=112 pruub=12.888080597s) [0] async=[0] r=-1 lpr=112 pi=[64,112)/1 crt=73'485 lcod 72'484 active pruub 193.469558716s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:31 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 112 pg[9.1e( v 73'485 (0'0,73'485] local-lis/les=109/110 n=6 ec=46/34 lis/c=109/64 les/c/f=110/65/0 sis=112 pruub=12.887829781s) [0] r=-1 lpr=112 pi=[64,112)/1 crt=73'485 lcod 72'484 unknown NOTIFY pruub 193.469558716s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:31 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 1 active+recovering+remapped, 1 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 8 op/s; 4/251 objects misplaced (1.594%); 120 B/s, 2 objects/s recovering
Jan 23 18:38:31 compute-0 ceph-mon[75097]: pgmap v221: 305 pgs: 1 active+recovering+remapped, 1 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/251 objects misplaced (1.594%); 94 B/s, 1 objects/s recovering
Jan 23 18:38:31 compute-0 ceph-mon[75097]: osdmap e111: 3 total, 3 up, 3 in
Jan 23 18:38:31 compute-0 ceph-mon[75097]: 7.17 scrub starts
Jan 23 18:38:31 compute-0 ceph-mon[75097]: 7.17 scrub ok
Jan 23 18:38:31 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 112 pg[9.1e( v 73'485 (0'0,73'485] local-lis/les=0/0 n=6 ec=46/34 lis/c=109/64 les/c/f=110/65/0 sis=112) [0] r=0 lpr=112 pi=[64,112)/1 pct=0'0 crt=73'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:31 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 112 pg[9.1e( v 73'485 (0'0,73'485] local-lis/les=0/0 n=6 ec=46/34 lis/c=109/64 les/c/f=110/65/0 sis=112) [0] r=0 lpr=112 pi=[64,112)/1 crt=73'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 23 18:38:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 23 18:38:33 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 23 18:38:33 compute-0 ceph-mon[75097]: osdmap e112: 3 total, 3 up, 3 in
Jan 23 18:38:33 compute-0 ceph-mon[75097]: pgmap v224: 305 pgs: 1 active+recovering+remapped, 1 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 8 op/s; 4/251 objects misplaced (1.594%); 120 B/s, 2 objects/s recovering
Jan 23 18:38:33 compute-0 ceph-osd[85953]: osd.0 pg_epoch: 113 pg[9.1e( v 73'485 (0'0,73'485] local-lis/les=112/113 n=6 ec=46/34 lis/c=109/64 les/c/f=110/65/0 sis=112) [0] r=0 lpr=112 pi=[64,112)/1 crt=73'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:33 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 active+recovering+remapped, 1 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 355 B/s wr, 9 op/s; 4/251 objects misplaced (1.594%); 125 B/s, 2 objects/s recovering
Jan 23 18:38:34 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 23 18:38:34 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 23 18:38:35 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 18:38:36 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 23 18:38:36 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 23 18:38:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 18:38:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:38:36 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 23 18:38:36 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 23 18:38:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 23 18:38:36 compute-0 ceph-mon[75097]: osdmap e113: 3 total, 3 up, 3 in
Jan 23 18:38:36 compute-0 ceph-mon[75097]: pgmap v226: 305 pgs: 1 active+recovering+remapped, 1 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 355 B/s wr, 9 op/s; 4/251 objects misplaced (1.594%); 125 B/s, 2 objects/s recovering
Jan 23 18:38:37 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 23 18:38:37 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 23 18:38:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:38:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 23 18:38:37 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 23 18:38:37 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 18:38:37 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 23 18:38:38 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 23 18:38:38 compute-0 ceph-mon[75097]: 5.1b scrub starts
Jan 23 18:38:38 compute-0 ceph-mon[75097]: 5.1b scrub ok
Jan 23 18:38:38 compute-0 ceph-mon[75097]: pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 18:38:38 compute-0 ceph-mon[75097]: 4.11 scrub starts
Jan 23 18:38:38 compute-0 ceph-mon[75097]: 7.10 scrub starts
Jan 23 18:38:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 23 18:38:38 compute-0 ceph-mon[75097]: 4.11 scrub ok
Jan 23 18:38:38 compute-0 ceph-mon[75097]: 7.10 scrub ok
Jan 23 18:38:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 18:38:38 compute-0 ceph-mon[75097]: osdmap e114: 3 total, 3 up, 3 in
Jan 23 18:38:38 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 23 18:38:38 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 23 18:38:38 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 23 18:38:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:38 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 23 18:38:39 compute-0 ceph-mon[75097]: 4.a scrub starts
Jan 23 18:38:39 compute-0 ceph-mon[75097]: 4.a scrub ok
Jan 23 18:38:39 compute-0 ceph-mon[75097]: pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 18:38:39 compute-0 ceph-mon[75097]: 7.9 scrub starts
Jan 23 18:38:39 compute-0 ceph-mon[75097]: 7.9 scrub ok
Jan 23 18:38:39 compute-0 ceph-mon[75097]: 3.14 scrub starts
Jan 23 18:38:39 compute-0 ceph-mon[75097]: 3.14 scrub ok
Jan 23 18:38:39 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 23 18:38:39 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 23 18:38:39 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 18:38:40 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 23 18:38:40 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 114 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=46/34 lis/c=65/65 les/c/f=66/66/0 sis=114 pruub=13.006076813s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=41'483 active pruub 201.722869873s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:40 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 114 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=46/34 lis/c=65/65 les/c/f=66/66/0 sis=114 pruub=13.006037712s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=41'483 unknown NOTIFY pruub 201.722869873s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:40 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=65/65 les/c/f=66/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:40 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.4717472148700764e-06 of space, bias 4.0, pg target 0.002966096657844092 quantized to 16 (current 16)
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:38:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:38:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 23 18:38:40 compute-0 ceph-mon[75097]: 4.13 scrub starts
Jan 23 18:38:40 compute-0 ceph-mon[75097]: 4.13 scrub ok
Jan 23 18:38:40 compute-0 ceph-mon[75097]: 7.12 scrub starts
Jan 23 18:38:40 compute-0 ceph-mon[75097]: 7.12 scrub ok
Jan 23 18:38:40 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 23 18:38:40 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 23 18:38:40 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 23 18:38:40 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 23 18:38:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 23 18:38:41 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 23 18:38:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 115 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=46/34 lis/c=65/65 les/c/f=66/66/0 sis=115) [1]/[2] r=0 lpr=115 pi=[65,115)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:41 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 115 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=46/34 lis/c=65/65 les/c/f=66/66/0 sis=115) [1]/[2] r=0 lpr=115 pi=[65,115)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:41 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=65/65 les/c/f=66/66/0 sis=115) [1]/[2] r=-1 lpr=115 pi=[65,115)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:41 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/34 lis/c=65/65 les/c/f=66/66/0 sis=115) [1]/[2] r=-1 lpr=115 pi=[65,115)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:41 compute-0 ceph-mon[75097]: pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 18:38:41 compute-0 ceph-mon[75097]: 5.4 scrub starts
Jan 23 18:38:41 compute-0 ceph-mon[75097]: 5.4 scrub ok
Jan 23 18:38:41 compute-0 ceph-mon[75097]: 4.1 scrub starts
Jan 23 18:38:41 compute-0 ceph-mon[75097]: 4.1 scrub ok
Jan 23 18:38:41 compute-0 ceph-mon[75097]: 8.1e scrub starts
Jan 23 18:38:41 compute-0 ceph-mon[75097]: 8.1e scrub ok
Jan 23 18:38:41 compute-0 ceph-mon[75097]: osdmap e115: 3 total, 3 up, 3 in
Jan 23 18:38:41 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 23 18:38:41 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 23 18:38:41 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 18:38:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 23 18:38:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 23 18:38:42 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 23 18:38:42 compute-0 ceph-mon[75097]: 4.10 scrub starts
Jan 23 18:38:42 compute-0 ceph-mon[75097]: 4.10 scrub ok
Jan 23 18:38:42 compute-0 ceph-mon[75097]: pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 18:38:42 compute-0 ceph-mon[75097]: osdmap e116: 3 total, 3 up, 3 in
Jan 23 18:38:42 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 116 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=115/116 n=6 ec=46/34 lis/c=65/65 les/c/f=66/66/0 sis=115) [1]/[2] async=[1] r=0 lpr=115 pi=[65,115)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:43 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 23 18:38:43 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 23 18:38:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 23 18:38:43 compute-0 ceph-mon[75097]: 10.8 scrub starts
Jan 23 18:38:43 compute-0 ceph-mon[75097]: 10.8 scrub ok
Jan 23 18:38:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 23 18:38:43 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 23 18:38:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:43 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 117 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=115/116 n=6 ec=46/34 lis/c=115/65 les/c/f=116/66/0 sis=117 pruub=14.899632454s) [1] async=[1] r=-1 lpr=117 pi=[65,117)/1 crt=41'483 active pruub 207.452667236s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:43 compute-0 ceph-osd[88072]: osd.2 pg_epoch: 117 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=115/116 n=6 ec=46/34 lis/c=115/65 les/c/f=116/66/0 sis=117 pruub=14.899553299s) [1] r=-1 lpr=117 pi=[65,117)/1 crt=41'483 unknown NOTIFY pruub 207.452667236s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 18:38:43 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:43 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 117 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=115/65 les/c/f=116/66/0 sis=117) [1] r=0 lpr=117 pi=[65,117)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 23 18:38:43 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 117 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=46/34 lis/c=115/65 les/c/f=116/66/0 sis=117) [1] r=0 lpr=117 pi=[65,117)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 18:38:44 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 23 18:38:44 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 23 18:38:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 23 18:38:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 23 18:38:44 compute-0 ceph-mon[75097]: osdmap e117: 3 total, 3 up, 3 in
Jan 23 18:38:44 compute-0 ceph-mon[75097]: pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:44 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 23 18:38:44 compute-0 ceph-osd[87009]: osd.1 pg_epoch: 118 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=117/118 n=6 ec=46/34 lis/c=115/65 les/c/f=116/66/0 sis=117) [1] r=0 lpr=117 pi=[65,117)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 18:38:45 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 23 18:38:45 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 23 18:38:45 compute-0 ceph-mon[75097]: 4.14 scrub starts
Jan 23 18:38:45 compute-0 ceph-mon[75097]: 4.14 scrub ok
Jan 23 18:38:45 compute-0 ceph-mon[75097]: osdmap e118: 3 total, 3 up, 3 in
Jan 23 18:38:45 compute-0 ceph-mon[75097]: 10.4 scrub starts
Jan 23 18:38:45 compute-0 ceph-mon[75097]: 10.4 scrub ok
Jan 23 18:38:45 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:46 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 23 18:38:46 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 23 18:38:46 compute-0 ceph-mon[75097]: pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:46 compute-0 ceph-mon[75097]: 8.9 scrub starts
Jan 23 18:38:46 compute-0 ceph-mon[75097]: 8.9 scrub ok
Jan 23 18:38:47 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 23 18:38:47 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 23 18:38:47 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:47 compute-0 ceph-mon[75097]: 4.12 scrub starts
Jan 23 18:38:47 compute-0 ceph-mon[75097]: 4.12 scrub ok
Jan 23 18:38:48 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 23 18:38:48 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 23 18:38:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:48 compute-0 ceph-mon[75097]: pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:38:48 compute-0 ceph-mon[75097]: 7.6 scrub starts
Jan 23 18:38:48 compute-0 ceph-mon[75097]: 7.6 scrub ok
Jan 23 18:38:49 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 23 18:38:49 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 23 18:38:49 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 23 18:38:50 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 23 18:38:50 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 23 18:38:50 compute-0 sudo[101601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:38:50 compute-0 sudo[101601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:38:50 compute-0 sudo[101601]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:50 compute-0 sudo[101626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:38:50 compute-0 sudo[101626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:38:50 compute-0 ceph-mon[75097]: 3.3 scrub starts
Jan 23 18:38:50 compute-0 ceph-mon[75097]: 3.3 scrub ok
Jan 23 18:38:50 compute-0 sudo[101626]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:38:50 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:38:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:38:50 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:38:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:38:50 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:38:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:38:50 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:38:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:38:50 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:38:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:38:50 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:38:50 compute-0 sudo[101682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:38:50 compute-0 sudo[101682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:38:50 compute-0 sudo[101682]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:50 compute-0 sudo[101707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:38:50 compute-0 sudo[101707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:38:51 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 23 18:38:51 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 23 18:38:51 compute-0 podman[101744]: 2026-01-23 18:38:51.183448288 +0000 UTC m=+0.042691914 container create f9da07f0edcf82d4afc779855c415ed54b39f223aee5972a89b9e80e78774149 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_snyder, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 18:38:51 compute-0 systemd[1]: Started libpod-conmon-f9da07f0edcf82d4afc779855c415ed54b39f223aee5972a89b9e80e78774149.scope.
Jan 23 18:38:51 compute-0 ceph-mon[75097]: pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 23 18:38:51 compute-0 ceph-mon[75097]: 2.f scrub starts
Jan 23 18:38:51 compute-0 ceph-mon[75097]: 2.f scrub ok
Jan 23 18:38:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:38:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:38:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:38:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:38:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:38:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:38:51 compute-0 ceph-mon[75097]: 5.5 scrub starts
Jan 23 18:38:51 compute-0 ceph-mon[75097]: 5.5 scrub ok
Jan 23 18:38:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:38:51 compute-0 podman[101744]: 2026-01-23 18:38:51.164435464 +0000 UTC m=+0.023679100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:38:51 compute-0 podman[101744]: 2026-01-23 18:38:51.261210467 +0000 UTC m=+0.120454113 container init f9da07f0edcf82d4afc779855c415ed54b39f223aee5972a89b9e80e78774149 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:38:51 compute-0 podman[101744]: 2026-01-23 18:38:51.267440951 +0000 UTC m=+0.126684577 container start f9da07f0edcf82d4afc779855c415ed54b39f223aee5972a89b9e80e78774149 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 23 18:38:51 compute-0 podman[101744]: 2026-01-23 18:38:51.271714798 +0000 UTC m=+0.130958514 container attach f9da07f0edcf82d4afc779855c415ed54b39f223aee5972a89b9e80e78774149 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_snyder, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:38:51 compute-0 systemd[1]: libpod-f9da07f0edcf82d4afc779855c415ed54b39f223aee5972a89b9e80e78774149.scope: Deactivated successfully.
Jan 23 18:38:51 compute-0 funny_snyder[101762]: 167 167
Jan 23 18:38:51 compute-0 conmon[101762]: conmon f9da07f0edcf82d4afc7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9da07f0edcf82d4afc779855c415ed54b39f223aee5972a89b9e80e78774149.scope/container/memory.events
Jan 23 18:38:51 compute-0 podman[101744]: 2026-01-23 18:38:51.274317333 +0000 UTC m=+0.133560959 container died f9da07f0edcf82d4afc779855c415ed54b39f223aee5972a89b9e80e78774149 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 23 18:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0fdf6a3fb4b9ae83e2fbd68fece347c41b290fc5660cde3faf4ae155086d352-merged.mount: Deactivated successfully.
Jan 23 18:38:51 compute-0 podman[101744]: 2026-01-23 18:38:51.32239193 +0000 UTC m=+0.181635566 container remove f9da07f0edcf82d4afc779855c415ed54b39f223aee5972a89b9e80e78774149 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:38:51 compute-0 systemd[1]: libpod-conmon-f9da07f0edcf82d4afc779855c415ed54b39f223aee5972a89b9e80e78774149.scope: Deactivated successfully.
Jan 23 18:38:51 compute-0 podman[101787]: 2026-01-23 18:38:51.471745542 +0000 UTC m=+0.042283684 container create f55ffcdf171147517e216301edd8a2a8ee5e2ae3557ede5ee1ea2d0c67b20934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 18:38:51 compute-0 systemd[1]: Started libpod-conmon-f55ffcdf171147517e216301edd8a2a8ee5e2ae3557ede5ee1ea2d0c67b20934.scope.
Jan 23 18:38:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3aee206d862451d3354590178064abdf50dea07f0d3c89013766023c9c9e6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3aee206d862451d3354590178064abdf50dea07f0d3c89013766023c9c9e6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3aee206d862451d3354590178064abdf50dea07f0d3c89013766023c9c9e6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3aee206d862451d3354590178064abdf50dea07f0d3c89013766023c9c9e6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3aee206d862451d3354590178064abdf50dea07f0d3c89013766023c9c9e6e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:51 compute-0 podman[101787]: 2026-01-23 18:38:51.557783626 +0000 UTC m=+0.128321808 container init f55ffcdf171147517e216301edd8a2a8ee5e2ae3557ede5ee1ea2d0c67b20934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 23 18:38:51 compute-0 podman[101787]: 2026-01-23 18:38:51.454921733 +0000 UTC m=+0.025459895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:38:51 compute-0 podman[101787]: 2026-01-23 18:38:51.563911639 +0000 UTC m=+0.134449781 container start f55ffcdf171147517e216301edd8a2a8ee5e2ae3557ede5ee1ea2d0c67b20934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:38:51 compute-0 podman[101787]: 2026-01-23 18:38:51.566948024 +0000 UTC m=+0.137486196 container attach f55ffcdf171147517e216301edd8a2a8ee5e2ae3557ede5ee1ea2d0c67b20934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 23 18:38:51 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 23 18:38:51 compute-0 sleepy_wilson[101804]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:38:51 compute-0 sleepy_wilson[101804]: --> All data devices are unavailable
Jan 23 18:38:52 compute-0 systemd[1]: libpod-f55ffcdf171147517e216301edd8a2a8ee5e2ae3557ede5ee1ea2d0c67b20934.scope: Deactivated successfully.
Jan 23 18:38:52 compute-0 podman[101787]: 2026-01-23 18:38:52.029905041 +0000 UTC m=+0.600443183 container died f55ffcdf171147517e216301edd8a2a8ee5e2ae3557ede5ee1ea2d0c67b20934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c3aee206d862451d3354590178064abdf50dea07f0d3c89013766023c9c9e6e-merged.mount: Deactivated successfully.
Jan 23 18:38:52 compute-0 podman[101787]: 2026-01-23 18:38:52.082977252 +0000 UTC m=+0.653515404 container remove f55ffcdf171147517e216301edd8a2a8ee5e2ae3557ede5ee1ea2d0c67b20934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:38:52 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 23 18:38:52 compute-0 systemd[1]: libpod-conmon-f55ffcdf171147517e216301edd8a2a8ee5e2ae3557ede5ee1ea2d0c67b20934.scope: Deactivated successfully.
Jan 23 18:38:52 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 23 18:38:52 compute-0 sudo[101707]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:52 compute-0 sudo[101840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:38:52 compute-0 sudo[101840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:38:52 compute-0 sudo[101840]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:52 compute-0 sudo[101867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:38:52 compute-0 sudo[101867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:38:52 compute-0 podman[101904]: 2026-01-23 18:38:52.50505931 +0000 UTC m=+0.027973908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:38:52 compute-0 podman[101904]: 2026-01-23 18:38:52.979437281 +0000 UTC m=+0.502351849 container create 9eced6ce16eee263129e965037352eb86de32a6b1c919bcdf518bf51dda62d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_albattani, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 18:38:52 compute-0 ceph-mon[75097]: pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 23 18:38:52 compute-0 ceph-mon[75097]: 2.2 scrub starts
Jan 23 18:38:52 compute-0 ceph-mon[75097]: 2.2 scrub ok
Jan 23 18:38:53 compute-0 systemd[1]: Started libpod-conmon-9eced6ce16eee263129e965037352eb86de32a6b1c919bcdf518bf51dda62d08.scope.
Jan 23 18:38:53 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:38:53 compute-0 podman[101904]: 2026-01-23 18:38:53.101510402 +0000 UTC m=+0.624425000 container init 9eced6ce16eee263129e965037352eb86de32a6b1c919bcdf518bf51dda62d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_albattani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:38:53 compute-0 podman[101904]: 2026-01-23 18:38:53.10786608 +0000 UTC m=+0.630780648 container start 9eced6ce16eee263129e965037352eb86de32a6b1c919bcdf518bf51dda62d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_albattani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:38:53 compute-0 podman[101904]: 2026-01-23 18:38:53.111583403 +0000 UTC m=+0.634497971 container attach 9eced6ce16eee263129e965037352eb86de32a6b1c919bcdf518bf51dda62d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:38:53 compute-0 suspicious_albattani[101920]: 167 167
Jan 23 18:38:53 compute-0 systemd[1]: libpod-9eced6ce16eee263129e965037352eb86de32a6b1c919bcdf518bf51dda62d08.scope: Deactivated successfully.
Jan 23 18:38:53 compute-0 podman[101904]: 2026-01-23 18:38:53.114463265 +0000 UTC m=+0.637377843 container died 9eced6ce16eee263129e965037352eb86de32a6b1c919bcdf518bf51dda62d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_albattani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b078aa610d89cfd3dbabd1dd8265f05b35d65b7821974c539afdc5d80e194b79-merged.mount: Deactivated successfully.
Jan 23 18:38:53 compute-0 podman[101904]: 2026-01-23 18:38:53.207591046 +0000 UTC m=+0.730505634 container remove 9eced6ce16eee263129e965037352eb86de32a6b1c919bcdf518bf51dda62d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:38:53 compute-0 systemd[1]: libpod-conmon-9eced6ce16eee263129e965037352eb86de32a6b1c919bcdf518bf51dda62d08.scope: Deactivated successfully.
Jan 23 18:38:53 compute-0 podman[101949]: 2026-01-23 18:38:53.415764893 +0000 UTC m=+0.044658094 container create 3461db135294a264e10dfc1d7307e9bfc13a6ae74236412e6fa991e91c045637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 18:38:53 compute-0 systemd[1]: Started libpod-conmon-3461db135294a264e10dfc1d7307e9bfc13a6ae74236412e6fa991e91c045637.scope.
Jan 23 18:38:53 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 23 18:38:53 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 23 18:38:53 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba9eb3616d2e5ed1e8d5cb24d8fdec2bf67e6a73a6132ab4ad93d6dc231e7ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba9eb3616d2e5ed1e8d5cb24d8fdec2bf67e6a73a6132ab4ad93d6dc231e7ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba9eb3616d2e5ed1e8d5cb24d8fdec2bf67e6a73a6132ab4ad93d6dc231e7ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba9eb3616d2e5ed1e8d5cb24d8fdec2bf67e6a73a6132ab4ad93d6dc231e7ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:53 compute-0 podman[101949]: 2026-01-23 18:38:53.398988605 +0000 UTC m=+0.027881726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:38:53 compute-0 podman[101949]: 2026-01-23 18:38:53.500437443 +0000 UTC m=+0.129330574 container init 3461db135294a264e10dfc1d7307e9bfc13a6ae74236412e6fa991e91c045637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:38:53 compute-0 podman[101949]: 2026-01-23 18:38:53.510289769 +0000 UTC m=+0.139182870 container start 3461db135294a264e10dfc1d7307e9bfc13a6ae74236412e6fa991e91c045637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:38:53 compute-0 podman[101949]: 2026-01-23 18:38:53.514247417 +0000 UTC m=+0.143140548 container attach 3461db135294a264e10dfc1d7307e9bfc13a6ae74236412e6fa991e91c045637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 18:38:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:53 compute-0 gracious_buck[101966]: {
Jan 23 18:38:53 compute-0 gracious_buck[101966]:     "0": [
Jan 23 18:38:53 compute-0 gracious_buck[101966]:         {
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "devices": [
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "/dev/loop3"
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             ],
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_name": "ceph_lv0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_size": "21470642176",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "name": "ceph_lv0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "tags": {
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.cluster_name": "ceph",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.crush_device_class": "",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.encrypted": "0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.objectstore": "bluestore",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.osd_id": "0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.type": "block",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.vdo": "0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.with_tpm": "0"
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             },
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "type": "block",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "vg_name": "ceph_vg0"
Jan 23 18:38:53 compute-0 gracious_buck[101966]:         }
Jan 23 18:38:53 compute-0 gracious_buck[101966]:     ],
Jan 23 18:38:53 compute-0 gracious_buck[101966]:     "1": [
Jan 23 18:38:53 compute-0 gracious_buck[101966]:         {
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "devices": [
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "/dev/loop4"
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             ],
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_name": "ceph_lv1",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_size": "21470642176",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "name": "ceph_lv1",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "tags": {
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.cluster_name": "ceph",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.crush_device_class": "",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.encrypted": "0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.objectstore": "bluestore",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.osd_id": "1",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.type": "block",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.vdo": "0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.with_tpm": "0"
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             },
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "type": "block",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "vg_name": "ceph_vg1"
Jan 23 18:38:53 compute-0 gracious_buck[101966]:         }
Jan 23 18:38:53 compute-0 gracious_buck[101966]:     ],
Jan 23 18:38:53 compute-0 gracious_buck[101966]:     "2": [
Jan 23 18:38:53 compute-0 gracious_buck[101966]:         {
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "devices": [
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "/dev/loop5"
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             ],
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_name": "ceph_lv2",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_size": "21470642176",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "name": "ceph_lv2",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "tags": {
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.cluster_name": "ceph",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.crush_device_class": "",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.encrypted": "0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.objectstore": "bluestore",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.osd_id": "2",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.type": "block",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.vdo": "0",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:                 "ceph.with_tpm": "0"
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             },
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "type": "block",
Jan 23 18:38:53 compute-0 gracious_buck[101966]:             "vg_name": "ceph_vg2"
Jan 23 18:38:53 compute-0 gracious_buck[101966]:         }
Jan 23 18:38:53 compute-0 gracious_buck[101966]:     ]
Jan 23 18:38:53 compute-0 gracious_buck[101966]: }
Jan 23 18:38:53 compute-0 systemd[1]: libpod-3461db135294a264e10dfc1d7307e9bfc13a6ae74236412e6fa991e91c045637.scope: Deactivated successfully.
Jan 23 18:38:53 compute-0 podman[101949]: 2026-01-23 18:38:53.829309908 +0000 UTC m=+0.458203019 container died 3461db135294a264e10dfc1d7307e9bfc13a6ae74236412e6fa991e91c045637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 23 18:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ba9eb3616d2e5ed1e8d5cb24d8fdec2bf67e6a73a6132ab4ad93d6dc231e7ae-merged.mount: Deactivated successfully.
Jan 23 18:38:53 compute-0 podman[101949]: 2026-01-23 18:38:53.878176766 +0000 UTC m=+0.507069857 container remove 3461db135294a264e10dfc1d7307e9bfc13a6ae74236412e6fa991e91c045637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 18:38:53 compute-0 systemd[1]: libpod-conmon-3461db135294a264e10dfc1d7307e9bfc13a6ae74236412e6fa991e91c045637.scope: Deactivated successfully.
Jan 23 18:38:53 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 23 18:38:53 compute-0 sudo[101867]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:53 compute-0 sudo[101993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:38:54 compute-0 sudo[101993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:38:54 compute-0 sudo[101993]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:54 compute-0 sudo[102018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:38:54 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 23 18:38:54 compute-0 sudo[102018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:38:54 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 23 18:38:54 compute-0 podman[102062]: 2026-01-23 18:38:54.356254448 +0000 UTC m=+0.042981332 container create 49a11eaa47f70d03cf42fe58664be6bbae38115615c0cab35e692dc44f4dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 18:38:54 compute-0 systemd[1]: Started libpod-conmon-49a11eaa47f70d03cf42fe58664be6bbae38115615c0cab35e692dc44f4dddf0.scope.
Jan 23 18:38:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:38:54 compute-0 podman[102062]: 2026-01-23 18:38:54.338160107 +0000 UTC m=+0.024887011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:38:54 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 23 18:38:54 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 23 18:38:54 compute-0 podman[102062]: 2026-01-23 18:38:54.591421828 +0000 UTC m=+0.278148732 container init 49a11eaa47f70d03cf42fe58664be6bbae38115615c0cab35e692dc44f4dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:38:54 compute-0 podman[102062]: 2026-01-23 18:38:54.602587066 +0000 UTC m=+0.289313940 container start 49a11eaa47f70d03cf42fe58664be6bbae38115615c0cab35e692dc44f4dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 18:38:54 compute-0 suspicious_meitner[102085]: 167 167
Jan 23 18:38:54 compute-0 systemd[1]: libpod-49a11eaa47f70d03cf42fe58664be6bbae38115615c0cab35e692dc44f4dddf0.scope: Deactivated successfully.
Jan 23 18:38:54 compute-0 podman[102062]: 2026-01-23 18:38:54.607011676 +0000 UTC m=+0.293738570 container attach 49a11eaa47f70d03cf42fe58664be6bbae38115615c0cab35e692dc44f4dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 18:38:54 compute-0 conmon[102085]: conmon 49a11eaa47f70d03cf42 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49a11eaa47f70d03cf42fe58664be6bbae38115615c0cab35e692dc44f4dddf0.scope/container/memory.events
Jan 23 18:38:54 compute-0 podman[102062]: 2026-01-23 18:38:54.609013017 +0000 UTC m=+0.295739901 container died 49a11eaa47f70d03cf42fe58664be6bbae38115615c0cab35e692dc44f4dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:38:54 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 23 18:38:54 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 23 18:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-88d5ea81e429fb55b5f08270a12ca95de2813c1fa55586ae12e6cd065f0945c9-merged.mount: Deactivated successfully.
Jan 23 18:38:54 compute-0 podman[102062]: 2026-01-23 18:38:54.760510031 +0000 UTC m=+0.447236935 container remove 49a11eaa47f70d03cf42fe58664be6bbae38115615c0cab35e692dc44f4dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:38:54 compute-0 systemd[1]: libpod-conmon-49a11eaa47f70d03cf42fe58664be6bbae38115615c0cab35e692dc44f4dddf0.scope: Deactivated successfully.
Jan 23 18:38:54 compute-0 podman[102117]: 2026-01-23 18:38:54.972713139 +0000 UTC m=+0.065475922 container create d1618974ad67171abe22f09d1fc37d1aa6a4fb0deccb8fe9bf53ac29799e243f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 18:38:55 compute-0 systemd[1]: Started libpod-conmon-d1618974ad67171abe22f09d1fc37d1aa6a4fb0deccb8fe9bf53ac29799e243f.scope.
Jan 23 18:38:55 compute-0 podman[102117]: 2026-01-23 18:38:54.934797344 +0000 UTC m=+0.027560137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:38:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e11782b447b3541d27d9176d287302cd6f74587d597724b206d68966ea76711/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e11782b447b3541d27d9176d287302cd6f74587d597724b206d68966ea76711/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e11782b447b3541d27d9176d287302cd6f74587d597724b206d68966ea76711/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e11782b447b3541d27d9176d287302cd6f74587d597724b206d68966ea76711/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:38:55 compute-0 ceph-mon[75097]: 4.e scrub starts
Jan 23 18:38:55 compute-0 ceph-mon[75097]: 4.e scrub ok
Jan 23 18:38:55 compute-0 ceph-mon[75097]: pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 23 18:38:55 compute-0 ceph-mon[75097]: 10.17 scrub starts
Jan 23 18:38:55 compute-0 ceph-mon[75097]: 10.17 scrub ok
Jan 23 18:38:55 compute-0 ceph-mon[75097]: 4.8 scrub starts
Jan 23 18:38:55 compute-0 ceph-mon[75097]: 4.8 scrub ok
Jan 23 18:38:55 compute-0 podman[102117]: 2026-01-23 18:38:55.277821052 +0000 UTC m=+0.370583885 container init d1618974ad67171abe22f09d1fc37d1aa6a4fb0deccb8fe9bf53ac29799e243f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:38:55 compute-0 podman[102117]: 2026-01-23 18:38:55.283595875 +0000 UTC m=+0.376358668 container start d1618974ad67171abe22f09d1fc37d1aa6a4fb0deccb8fe9bf53ac29799e243f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_germain, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:38:55 compute-0 podman[102117]: 2026-01-23 18:38:55.289984935 +0000 UTC m=+0.382747738 container attach d1618974ad67171abe22f09d1fc37d1aa6a4fb0deccb8fe9bf53ac29799e243f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_germain, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:38:55 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 23 18:38:55 compute-0 lvm[102213]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:38:55 compute-0 lvm[102212]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:38:55 compute-0 lvm[102213]: VG ceph_vg1 finished
Jan 23 18:38:55 compute-0 lvm[102212]: VG ceph_vg0 finished
Jan 23 18:38:55 compute-0 lvm[102215]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:38:55 compute-0 lvm[102215]: VG ceph_vg2 finished
Jan 23 18:38:56 compute-0 heuristic_germain[102133]: {}
Jan 23 18:38:56 compute-0 systemd[1]: libpod-d1618974ad67171abe22f09d1fc37d1aa6a4fb0deccb8fe9bf53ac29799e243f.scope: Deactivated successfully.
Jan 23 18:38:56 compute-0 podman[102117]: 2026-01-23 18:38:56.062901544 +0000 UTC m=+1.155664317 container died d1618974ad67171abe22f09d1fc37d1aa6a4fb0deccb8fe9bf53ac29799e243f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:38:56 compute-0 systemd[1]: libpod-d1618974ad67171abe22f09d1fc37d1aa6a4fb0deccb8fe9bf53ac29799e243f.scope: Consumed 1.271s CPU time.
Jan 23 18:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e11782b447b3541d27d9176d287302cd6f74587d597724b206d68966ea76711-merged.mount: Deactivated successfully.
Jan 23 18:38:56 compute-0 podman[102117]: 2026-01-23 18:38:56.110281195 +0000 UTC m=+1.203043978 container remove d1618974ad67171abe22f09d1fc37d1aa6a4fb0deccb8fe9bf53ac29799e243f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 18:38:56 compute-0 systemd[1]: libpod-conmon-d1618974ad67171abe22f09d1fc37d1aa6a4fb0deccb8fe9bf53ac29799e243f.scope: Deactivated successfully.
Jan 23 18:38:56 compute-0 sudo[102018]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:38:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:38:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:38:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:38:56 compute-0 sudo[102230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:38:56 compute-0 sudo[102230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:38:56 compute-0 sudo[102230]: pam_unix(sudo:session): session closed for user root
Jan 23 18:38:56 compute-0 ceph-mon[75097]: 4.1b scrub starts
Jan 23 18:38:56 compute-0 ceph-mon[75097]: 4.1b scrub ok
Jan 23 18:38:56 compute-0 ceph-mon[75097]: pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 23 18:38:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:38:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:38:57 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 23 18:38:57 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 23 18:38:57 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 23 18:38:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:38:58 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 23 18:38:58 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 23 18:38:59 compute-0 ceph-mon[75097]: 4.1a scrub starts
Jan 23 18:38:59 compute-0 ceph-mon[75097]: 4.1a scrub ok
Jan 23 18:38:59 compute-0 ceph-mon[75097]: pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 23 18:38:59 compute-0 ceph-mon[75097]: 4.9 scrub starts
Jan 23 18:38:59 compute-0 ceph-mon[75097]: 4.9 scrub ok
Jan 23 18:38:59 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 23 18:38:59 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 23 18:38:59 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 23 18:39:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:39:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:39:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:39:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:39:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:39:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:39:00 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 23 18:39:00 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 23 18:39:00 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 23 18:39:00 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 23 18:39:01 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:02 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 23 18:39:03 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:04 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 23 18:39:05 compute-0 ceph-mon[75097]: 3.1e scrub starts
Jan 23 18:39:05 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 23 18:39:05 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 23 18:39:05 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:05 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 23 18:39:05 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 3.1e scrub ok
Jan 23 18:39:06 compute-0 ceph-mon[75097]: pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 8.15 scrub starts
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 8.15 scrub ok
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 4.5 scrub starts
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 4.5 scrub ok
Jan 23 18:39:06 compute-0 ceph-mon[75097]: pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 4.18 scrub starts
Jan 23 18:39:06 compute-0 ceph-mon[75097]: pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 4.18 scrub ok
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 8.e scrub starts
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 8.e scrub ok
Jan 23 18:39:06 compute-0 ceph-mon[75097]: pgmap v247: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 8.11 scrub starts
Jan 23 18:39:06 compute-0 ceph-mon[75097]: 8.11 scrub ok
Jan 23 18:39:07 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:08 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 23 18:39:08 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 23 18:39:08 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 23 18:39:08 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 23 18:39:08 compute-0 ceph-mon[75097]: pgmap v248: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:08 compute-0 ceph-mon[75097]: 4.7 scrub starts
Jan 23 18:39:08 compute-0 ceph-mon[75097]: 4.7 scrub ok
Jan 23 18:39:09 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:09 compute-0 ceph-mon[75097]: 8.c scrub starts
Jan 23 18:39:09 compute-0 ceph-mon[75097]: 8.c scrub ok
Jan 23 18:39:10 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 23 18:39:10 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 23 18:39:10 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 23 18:39:10 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 23 18:39:10 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 23 18:39:10 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 23 18:39:10 compute-0 ceph-mon[75097]: pgmap v249: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:10 compute-0 ceph-mon[75097]: 4.f scrub starts
Jan 23 18:39:10 compute-0 ceph-mon[75097]: 4.f scrub ok
Jan 23 18:39:11 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 23 18:39:11 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 23 18:39:11 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 23 18:39:11 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 23 18:39:11 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 23 18:39:11 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 23 18:39:11 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:12 compute-0 ceph-mon[75097]: 3.6 scrub starts
Jan 23 18:39:12 compute-0 ceph-mon[75097]: 3.6 scrub ok
Jan 23 18:39:12 compute-0 ceph-mon[75097]: 3.8 scrub starts
Jan 23 18:39:12 compute-0 ceph-mon[75097]: 3.8 scrub ok
Jan 23 18:39:12 compute-0 ceph-mon[75097]: 4.d scrub starts
Jan 23 18:39:12 compute-0 ceph-mon[75097]: 4.d scrub ok
Jan 23 18:39:12 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 23 18:39:12 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 23 18:39:13 compute-0 ceph-mon[75097]: 5.2 scrub starts
Jan 23 18:39:13 compute-0 ceph-mon[75097]: 5.2 scrub ok
Jan 23 18:39:13 compute-0 ceph-mon[75097]: 3.5 scrub starts
Jan 23 18:39:13 compute-0 ceph-mon[75097]: 3.5 scrub ok
Jan 23 18:39:13 compute-0 ceph-mon[75097]: pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:13 compute-0 ceph-mon[75097]: 4.2 scrub starts
Jan 23 18:39:13 compute-0 ceph-mon[75097]: 4.2 scrub ok
Jan 23 18:39:13 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 23 18:39:13 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 23 18:39:13 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 23 18:39:13 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 23 18:39:13 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:15 compute-0 ceph-mon[75097]: 7.3 scrub starts
Jan 23 18:39:15 compute-0 ceph-mon[75097]: 7.3 scrub ok
Jan 23 18:39:15 compute-0 ceph-mon[75097]: 5.19 scrub starts
Jan 23 18:39:15 compute-0 ceph-mon[75097]: 5.19 scrub ok
Jan 23 18:39:15 compute-0 ceph-mon[75097]: pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:15 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 23 18:39:15 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 23 18:39:15 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 23 18:39:15 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 23 18:39:15 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:16 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 23 18:39:16 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 23 18:39:16 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 23 18:39:17 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 23 18:39:17 compute-0 ceph-mon[75097]: 5.18 scrub starts
Jan 23 18:39:17 compute-0 ceph-mon[75097]: 5.18 scrub ok
Jan 23 18:39:17 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 23 18:39:17 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 23 18:39:17 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 23 18:39:17 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 23 18:39:17 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:18 compute-0 ceph-mon[75097]: 5.3 scrub starts
Jan 23 18:39:18 compute-0 ceph-mon[75097]: 5.3 scrub ok
Jan 23 18:39:18 compute-0 ceph-mon[75097]: pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:18 compute-0 ceph-mon[75097]: 7.c scrub starts
Jan 23 18:39:18 compute-0 ceph-mon[75097]: 2.b scrub starts
Jan 23 18:39:18 compute-0 ceph-mon[75097]: 7.c scrub ok
Jan 23 18:39:18 compute-0 ceph-mon[75097]: 2.b scrub ok
Jan 23 18:39:18 compute-0 ceph-mon[75097]: 5.1a scrub starts
Jan 23 18:39:18 compute-0 ceph-mon[75097]: 5.1a scrub ok
Jan 23 18:39:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 23 18:39:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 23 18:39:19 compute-0 ceph-mon[75097]: 8.2 scrub starts
Jan 23 18:39:19 compute-0 ceph-mon[75097]: 8.2 scrub ok
Jan 23 18:39:19 compute-0 ceph-mon[75097]: pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:19 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 23 18:39:19 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 23 18:39:19 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 23 18:39:19 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 23 18:39:19 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 23 18:39:19 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 23 18:39:19 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:20 compute-0 ceph-mon[75097]: 7.f scrub starts
Jan 23 18:39:20 compute-0 ceph-mon[75097]: 7.f scrub ok
Jan 23 18:39:20 compute-0 ceph-mon[75097]: 4.4 scrub starts
Jan 23 18:39:20 compute-0 ceph-mon[75097]: 4.4 scrub ok
Jan 23 18:39:20 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 23 18:39:20 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 23 18:39:21 compute-0 ceph-mon[75097]: 3.7 scrub starts
Jan 23 18:39:21 compute-0 ceph-mon[75097]: 3.7 scrub ok
Jan 23 18:39:21 compute-0 ceph-mon[75097]: 3.a scrub starts
Jan 23 18:39:21 compute-0 ceph-mon[75097]: 3.a scrub ok
Jan 23 18:39:21 compute-0 ceph-mon[75097]: pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:21 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 23 18:39:21 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 23 18:39:21 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 23 18:39:21 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 23 18:39:21 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 23 18:39:21 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 23 18:39:21 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:22 compute-0 ceph-mon[75097]: 7.1a scrub starts
Jan 23 18:39:22 compute-0 ceph-mon[75097]: 7.1a scrub ok
Jan 23 18:39:22 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 23 18:39:22 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 23 18:39:23 compute-0 ceph-mon[75097]: 3.9 scrub starts
Jan 23 18:39:23 compute-0 ceph-mon[75097]: 3.9 scrub ok
Jan 23 18:39:23 compute-0 ceph-mon[75097]: 7.1 scrub starts
Jan 23 18:39:23 compute-0 ceph-mon[75097]: 7.1 scrub ok
Jan 23 18:39:23 compute-0 ceph-mon[75097]: 2.1b scrub starts
Jan 23 18:39:23 compute-0 ceph-mon[75097]: 2.1b scrub ok
Jan 23 18:39:23 compute-0 ceph-mon[75097]: pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:23 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:24 compute-0 ceph-mon[75097]: 2.8 scrub starts
Jan 23 18:39:24 compute-0 ceph-mon[75097]: 2.8 scrub ok
Jan 23 18:39:24 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 23 18:39:24 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 23 18:39:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:25 compute-0 ceph-mon[75097]: pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:25 compute-0 ceph-mon[75097]: 10.1 scrub starts
Jan 23 18:39:25 compute-0 ceph-mon[75097]: 10.1 scrub ok
Jan 23 18:39:25 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 23 18:39:25 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 23 18:39:25 compute-0 sudo[101521]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:25 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:26 compute-0 ceph-mon[75097]: 10.1e scrub starts
Jan 23 18:39:26 compute-0 ceph-mon[75097]: 10.1e scrub ok
Jan 23 18:39:26 compute-0 ceph-mon[75097]: pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:26 compute-0 sudo[102433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpsgbavlalaivbvkanrhkgrvlytvbvmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193566.0847514-132-94082855049939/AnsiballZ_command.py'
Jan 23 18:39:26 compute-0 sudo[102433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:26 compute-0 python3.9[102435]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:39:26 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 23 18:39:26 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 23 18:39:27 compute-0 sudo[102433]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:27 compute-0 ceph-mon[75097]: 2.16 scrub starts
Jan 23 18:39:27 compute-0 ceph-mon[75097]: 2.16 scrub ok
Jan 23 18:39:27 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:27 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 23 18:39:27 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 23 18:39:27 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 23 18:39:27 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 23 18:39:27 compute-0 sudo[102720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzyusmrgqdvqppfmyclteobonlezbmgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193567.3869328-140-210054501368716/AnsiballZ_selinux.py'
Jan 23 18:39:27 compute-0 sudo[102720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:28 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 23 18:39:28 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 23 18:39:28 compute-0 python3.9[102722]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 23 18:39:28 compute-0 ceph-mon[75097]: pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:28 compute-0 ceph-mon[75097]: 7.13 scrub starts
Jan 23 18:39:28 compute-0 sudo[102720]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:28 compute-0 ceph-mon[75097]: 7.13 scrub ok
Jan 23 18:39:28 compute-0 sudo[102872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpyqxnwabevkxducuipbhzqpzrzjdjla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193568.5848-151-141118448158373/AnsiballZ_command.py'
Jan 23 18:39:28 compute-0 sudo[102872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:39:28
Jan 23 18:39:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:39:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:39:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['vms', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log']
Jan 23 18:39:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:39:29 compute-0 python3.9[102874]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 23 18:39:29 compute-0 sudo[102872]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:29 compute-0 ceph-mon[75097]: 8.d scrub starts
Jan 23 18:39:29 compute-0 ceph-mon[75097]: 8.d scrub ok
Jan 23 18:39:29 compute-0 ceph-mon[75097]: 10.10 scrub starts
Jan 23 18:39:29 compute-0 ceph-mon[75097]: 10.10 scrub ok
Jan 23 18:39:29 compute-0 sudo[103024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnyutsbaconojshgismskpiejrjfrjtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193569.269259-159-41120080801289/AnsiballZ_file.py'
Jan 23 18:39:29 compute-0 sudo[103024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:29 compute-0 python3.9[103026]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:39:29 compute-0 sudo[103024]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:29 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:39:30 compute-0 ceph-mon[75097]: pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:30 compute-0 sudo[103176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-virxmyprbmcvlcpqjpweearhldgxvhgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193570.007831-167-244117723603709/AnsiballZ_mount.py'
Jan 23 18:39:30 compute-0 sudo[103176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:30 compute-0 python3.9[103178]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 23 18:39:30 compute-0 sudo[103176]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:39:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:39:30 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 23 18:39:30 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 23 18:39:31 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 23 18:39:31 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 23 18:39:31 compute-0 ceph-mon[75097]: 3.15 scrub starts
Jan 23 18:39:31 compute-0 ceph-mon[75097]: 3.15 scrub ok
Jan 23 18:39:31 compute-0 sudo[103328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flaqfwrhfnadhypurwjjfgkbzuscddpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193571.561433-195-173666199625613/AnsiballZ_file.py'
Jan 23 18:39:31 compute-0 sudo[103328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:31 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:32 compute-0 python3.9[103330]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:39:32 compute-0 sudo[103328]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:32 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 23 18:39:32 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 23 18:39:32 compute-0 ceph-mon[75097]: 7.2 scrub starts
Jan 23 18:39:32 compute-0 ceph-mon[75097]: 7.2 scrub ok
Jan 23 18:39:32 compute-0 ceph-mon[75097]: pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:32 compute-0 ceph-mon[75097]: 3.17 scrub starts
Jan 23 18:39:32 compute-0 ceph-mon[75097]: 3.17 scrub ok
Jan 23 18:39:32 compute-0 sudo[103480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oybvewocczaeljvgczvxnqxwitredqhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193572.2484941-203-225299426265656/AnsiballZ_stat.py'
Jan 23 18:39:32 compute-0 sudo[103480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:32 compute-0 python3.9[103482]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:39:32 compute-0 sudo[103480]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:32 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 23 18:39:32 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 23 18:39:32 compute-0 sudo[103558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubqyoygxwtmoiemgurdbalagrtizjyow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193572.2484941-203-225299426265656/AnsiballZ_file.py'
Jan 23 18:39:32 compute-0 sudo[103558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:33 compute-0 python3.9[103560]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:39:33 compute-0 sudo[103558]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:33 compute-0 ceph-mon[75097]: 10.13 scrub starts
Jan 23 18:39:33 compute-0 ceph-mon[75097]: 10.13 scrub ok
Jan 23 18:39:33 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:33 compute-0 sudo[103710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpbwgyjsawwnmqicckeroofizbfblrsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193573.6531515-224-193334544639680/AnsiballZ_stat.py'
Jan 23 18:39:33 compute-0 sudo[103710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:34 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 23 18:39:34 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 23 18:39:34 compute-0 python3.9[103712]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:39:34 compute-0 sudo[103710]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:34 compute-0 ceph-mon[75097]: pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:34 compute-0 ceph-mon[75097]: 8.1f scrub starts
Jan 23 18:39:34 compute-0 ceph-mon[75097]: 8.1f scrub ok
Jan 23 18:39:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:35 compute-0 sudo[103864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwdjstpznphrkpnhuuoqyvstjtyvucqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193574.7145903-237-69734003804419/AnsiballZ_getent.py'
Jan 23 18:39:35 compute-0 sudo[103864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:35 compute-0 python3.9[103866]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 23 18:39:35 compute-0 sudo[103864]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:35 compute-0 sudo[104017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrsklnbwvjbawrhzbnysleznpcktytiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193575.6478496-247-18860171419772/AnsiballZ_getent.py'
Jan 23 18:39:35 compute-0 sudo[104017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:35 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:36 compute-0 python3.9[104019]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 23 18:39:36 compute-0 sudo[104017]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:36 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 23 18:39:36 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 23 18:39:36 compute-0 sudo[104170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdnvhpiiutyxwakbpqonynyrkjrbhmwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193576.2773323-255-264379292787384/AnsiballZ_group.py'
Jan 23 18:39:36 compute-0 sudo[104170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:36 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 23 18:39:36 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 23 18:39:36 compute-0 python3.9[104172]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 18:39:36 compute-0 sudo[104170]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:36 compute-0 ceph-mon[75097]: pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:36 compute-0 ceph-mon[75097]: 10.11 scrub starts
Jan 23 18:39:36 compute-0 ceph-mon[75097]: 10.11 scrub ok
Jan 23 18:39:37 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 23 18:39:37 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 23 18:39:37 compute-0 sudo[104322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwlwmuukbhtwvedpuufjznyvbregaaar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193577.1006453-264-22552390065681/AnsiballZ_file.py'
Jan 23 18:39:37 compute-0 sudo[104322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:37 compute-0 python3.9[104324]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 23 18:39:37 compute-0 sudo[104322]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:37 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:37 compute-0 ceph-mon[75097]: 2.13 scrub starts
Jan 23 18:39:37 compute-0 ceph-mon[75097]: 2.13 scrub ok
Jan 23 18:39:38 compute-0 sudo[104474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laaoquicpmemsrztalfrhyihrehatfvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193577.8764975-275-171572201217272/AnsiballZ_dnf.py'
Jan 23 18:39:38 compute-0 sudo[104474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:38 compute-0 python3.9[104476]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:39:38 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 23 18:39:38 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 23 18:39:39 compute-0 ceph-mon[75097]: 3.12 scrub starts
Jan 23 18:39:39 compute-0 ceph-mon[75097]: 3.12 scrub ok
Jan 23 18:39:39 compute-0 ceph-mon[75097]: pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:39 compute-0 ceph-mon[75097]: 5.1 scrub starts
Jan 23 18:39:39 compute-0 ceph-mon[75097]: 5.1 scrub ok
Jan 23 18:39:39 compute-0 sudo[104474]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:39 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 23 18:39:39 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 23 18:39:39 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:39 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 23 18:39:39 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 23 18:39:40 compute-0 ceph-mon[75097]: 5.1d scrub starts
Jan 23 18:39:40 compute-0 ceph-mon[75097]: 5.1d scrub ok
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:39:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:39:40 compute-0 sudo[104627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsoihlkeyppejpttdtcnterdxdaguxlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193579.9019017-283-83020887919872/AnsiballZ_file.py'
Jan 23 18:39:40 compute-0 sudo[104627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:40 compute-0 python3.9[104629]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:39:40 compute-0 sudo[104627]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:40 compute-0 sudo[104779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqqycdwwxkstzqvauncpdygtbtiwpqln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193580.644651-291-272462269839836/AnsiballZ_stat.py'
Jan 23 18:39:40 compute-0 sudo[104779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:41 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 23 18:39:41 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 23 18:39:41 compute-0 ceph-mon[75097]: pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:41 compute-0 ceph-mon[75097]: 7.8 scrub starts
Jan 23 18:39:41 compute-0 ceph-mon[75097]: 7.8 scrub ok
Jan 23 18:39:41 compute-0 python3.9[104781]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:39:41 compute-0 sudo[104779]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:41 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 23 18:39:41 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 23 18:39:41 compute-0 sudo[104857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wucwywyvtjvbykrjvzgwlobxfuayighl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193580.644651-291-272462269839836/AnsiballZ_file.py'
Jan 23 18:39:41 compute-0 sudo[104857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:41 compute-0 python3.9[104859]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:39:41 compute-0 sudo[104857]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:41 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:42 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 23 18:39:42 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 23 18:39:42 compute-0 ceph-mon[75097]: 8.4 scrub starts
Jan 23 18:39:42 compute-0 ceph-mon[75097]: 8.4 scrub ok
Jan 23 18:39:42 compute-0 sudo[105009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsppnvxacwlpfylcfrdkjkyxkiaaxizd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193581.8557644-304-33303034701051/AnsiballZ_stat.py'
Jan 23 18:39:42 compute-0 sudo[105009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:42 compute-0 python3.9[105011]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:39:42 compute-0 sudo[105009]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:42 compute-0 sudo[105087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okpmuvuqmvdkdhcdqrabhyuxplydheat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193581.8557644-304-33303034701051/AnsiballZ_file.py'
Jan 23 18:39:42 compute-0 sudo[105087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:42 compute-0 python3.9[105089]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:39:42 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 23 18:39:42 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 23 18:39:42 compute-0 sudo[105087]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:43 compute-0 ceph-mon[75097]: 5.15 scrub starts
Jan 23 18:39:43 compute-0 ceph-mon[75097]: 5.15 scrub ok
Jan 23 18:39:43 compute-0 ceph-mon[75097]: pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:43 compute-0 ceph-mon[75097]: 7.a scrub starts
Jan 23 18:39:43 compute-0 ceph-mon[75097]: 7.a scrub ok
Jan 23 18:39:43 compute-0 ceph-mon[75097]: 10.f scrub starts
Jan 23 18:39:43 compute-0 ceph-mon[75097]: 10.f scrub ok
Jan 23 18:39:43 compute-0 sudo[105239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtbyyhhakdvnytvevqzdohqifrjklwmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193583.1810157-319-112271152854555/AnsiballZ_dnf.py'
Jan 23 18:39:43 compute-0 sudo[105239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:43 compute-0 python3.9[105241]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:39:43 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:44 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 23 18:39:44 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 23 18:39:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:44 compute-0 sudo[105239]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:45 compute-0 ceph-mon[75097]: pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:45 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 23 18:39:45 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 23 18:39:45 compute-0 python3.9[105392]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:39:45 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 23 18:39:45 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 23 18:39:45 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:46 compute-0 ceph-mon[75097]: 8.1a scrub starts
Jan 23 18:39:46 compute-0 ceph-mon[75097]: 8.1a scrub ok
Jan 23 18:39:46 compute-0 ceph-mon[75097]: 2.7 scrub starts
Jan 23 18:39:46 compute-0 ceph-mon[75097]: 2.7 scrub ok
Jan 23 18:39:46 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 23 18:39:46 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 23 18:39:46 compute-0 python3.9[105544]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 23 18:39:47 compute-0 ceph-mon[75097]: 5.14 scrub starts
Jan 23 18:39:47 compute-0 ceph-mon[75097]: 5.14 scrub ok
Jan 23 18:39:47 compute-0 ceph-mon[75097]: pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:47 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 23 18:39:47 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 23 18:39:47 compute-0 python3.9[105694]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:39:47 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:48 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 23 18:39:48 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 23 18:39:48 compute-0 ceph-mon[75097]: 2.11 scrub starts
Jan 23 18:39:48 compute-0 ceph-mon[75097]: 2.11 scrub ok
Jan 23 18:39:48 compute-0 sudo[105844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcvtfqtgankdrsilhozxeocroemjjqgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193587.6676536-360-76667499139518/AnsiballZ_systemd.py'
Jan 23 18:39:48 compute-0 sudo[105844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:48 compute-0 python3.9[105846]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:39:48 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 23 18:39:48 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 23 18:39:48 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 23 18:39:48 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 18:39:48 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 18:39:49 compute-0 sudo[105844]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:49 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 23 18:39:49 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 23 18:39:49 compute-0 ceph-mon[75097]: 10.7 scrub starts
Jan 23 18:39:49 compute-0 ceph-mon[75097]: 10.7 scrub ok
Jan 23 18:39:49 compute-0 ceph-mon[75097]: pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:49 compute-0 ceph-mon[75097]: 7.e scrub starts
Jan 23 18:39:49 compute-0 ceph-mon[75097]: 7.e scrub ok
Jan 23 18:39:49 compute-0 python3.9[106007]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 23 18:39:49 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:50 compute-0 ceph-mon[75097]: 3.e scrub starts
Jan 23 18:39:50 compute-0 ceph-mon[75097]: 3.e scrub ok
Jan 23 18:39:51 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 23 18:39:51 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 23 18:39:51 compute-0 ceph-mon[75097]: pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:51 compute-0 sudo[106157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yusmrxxkyyzfjpzlqgcpaanpaxbrnprn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193591.3323872-417-154678805046242/AnsiballZ_systemd.py'
Jan 23 18:39:51 compute-0 sudo[106157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:51 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:51 compute-0 python3.9[106159]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:39:52 compute-0 sudo[106157]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:52 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 23 18:39:52 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 23 18:39:52 compute-0 sudo[106311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkwjoofwklmcgucnbplyhgpulbhixnrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193592.187229-417-132664854357039/AnsiballZ_systemd.py'
Jan 23 18:39:52 compute-0 sudo[106311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:39:52 compute-0 ceph-mon[75097]: 3.11 scrub starts
Jan 23 18:39:52 compute-0 ceph-mon[75097]: 3.11 scrub ok
Jan 23 18:39:52 compute-0 ceph-mon[75097]: pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:52 compute-0 python3.9[106313]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:39:52 compute-0 sudo[106311]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:53 compute-0 sshd-session[99609]: Connection closed by 192.168.122.30 port 53952
Jan 23 18:39:53 compute-0 sshd-session[99606]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:39:53 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 23 18:39:53 compute-0 systemd[1]: session-34.scope: Consumed 1min 6.775s CPU time.
Jan 23 18:39:53 compute-0 systemd-logind[792]: Session 34 logged out. Waiting for processes to exit.
Jan 23 18:39:53 compute-0 systemd-logind[792]: Removed session 34.
Jan 23 18:39:53 compute-0 ceph-mon[75097]: 7.1b scrub starts
Jan 23 18:39:53 compute-0 ceph-mon[75097]: 7.1b scrub ok
Jan 23 18:39:53 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:53 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 23 18:39:53 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 23 18:39:54 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 23 18:39:54 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 23 18:39:54 compute-0 ceph-mon[75097]: pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:39:55 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 23 18:39:55 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 23 18:39:55 compute-0 ceph-mon[75097]: 2.9 scrub starts
Jan 23 18:39:55 compute-0 ceph-mon[75097]: 2.9 scrub ok
Jan 23 18:39:55 compute-0 ceph-mon[75097]: 8.14 scrub starts
Jan 23 18:39:55 compute-0 ceph-mon[75097]: 8.14 scrub ok
Jan 23 18:39:55 compute-0 ceph-mon[75097]: 7.15 scrub starts
Jan 23 18:39:55 compute-0 ceph-mon[75097]: 7.15 scrub ok
Jan 23 18:39:55 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 23 18:39:55 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 23 18:39:55 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:56 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 23 18:39:56 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 23 18:39:56 compute-0 sudo[106341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:39:56 compute-0 sudo[106341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:39:56 compute-0 sudo[106341]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:56 compute-0 sudo[106366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:39:56 compute-0 sudo[106366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:39:56 compute-0 ceph-mon[75097]: 5.c scrub starts
Jan 23 18:39:56 compute-0 ceph-mon[75097]: 5.c scrub ok
Jan 23 18:39:56 compute-0 ceph-mon[75097]: pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:56 compute-0 ceph-mon[75097]: 7.5 scrub starts
Jan 23 18:39:56 compute-0 ceph-mon[75097]: 7.5 scrub ok
Jan 23 18:39:56 compute-0 sudo[106366]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:39:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:39:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:39:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:39:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:39:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:39:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:39:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:39:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:39:57 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:39:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:39:57 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:39:57 compute-0 sudo[106422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:39:57 compute-0 sudo[106422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:39:57 compute-0 sudo[106422]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:57 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 23 18:39:57 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 23 18:39:57 compute-0 sudo[106447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:39:57 compute-0 sudo[106447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:39:57 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 23 18:39:57 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 23 18:39:57 compute-0 podman[106484]: 2026-01-23 18:39:57.447963458 +0000 UTC m=+0.062052437 container create 42bc09358c28ef2f2108fc6c5afdae4244ff697368b801b6689a4ccec2201e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 23 18:39:57 compute-0 systemd[76480]: Created slice User Background Tasks Slice.
Jan 23 18:39:57 compute-0 systemd[76480]: Starting Cleanup of User's Temporary Files and Directories...
Jan 23 18:39:57 compute-0 systemd[76480]: Finished Cleanup of User's Temporary Files and Directories.
Jan 23 18:39:57 compute-0 systemd[1]: Started libpod-conmon-42bc09358c28ef2f2108fc6c5afdae4244ff697368b801b6689a4ccec2201e9d.scope.
Jan 23 18:39:57 compute-0 podman[106484]: 2026-01-23 18:39:57.414045953 +0000 UTC m=+0.028134992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:39:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:39:57 compute-0 podman[106484]: 2026-01-23 18:39:57.562459041 +0000 UTC m=+0.176548030 container init 42bc09358c28ef2f2108fc6c5afdae4244ff697368b801b6689a4ccec2201e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 18:39:57 compute-0 podman[106484]: 2026-01-23 18:39:57.569484726 +0000 UTC m=+0.183573675 container start 42bc09358c28ef2f2108fc6c5afdae4244ff697368b801b6689a4ccec2201e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:39:57 compute-0 gallant_keldysh[106501]: 167 167
Jan 23 18:39:57 compute-0 systemd[1]: libpod-42bc09358c28ef2f2108fc6c5afdae4244ff697368b801b6689a4ccec2201e9d.scope: Deactivated successfully.
Jan 23 18:39:57 compute-0 podman[106484]: 2026-01-23 18:39:57.577167307 +0000 UTC m=+0.191256256 container attach 42bc09358c28ef2f2108fc6c5afdae4244ff697368b801b6689a4ccec2201e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:39:57 compute-0 podman[106484]: 2026-01-23 18:39:57.578027509 +0000 UTC m=+0.192116478 container died 42bc09358c28ef2f2108fc6c5afdae4244ff697368b801b6689a4ccec2201e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2e196cfd1ddc34096badaf6f4700f467bd55dc4139d6599e2711d44e369999e-merged.mount: Deactivated successfully.
Jan 23 18:39:57 compute-0 podman[106484]: 2026-01-23 18:39:57.630660461 +0000 UTC m=+0.244749410 container remove 42bc09358c28ef2f2108fc6c5afdae4244ff697368b801b6689a4ccec2201e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:39:57 compute-0 systemd[1]: libpod-conmon-42bc09358c28ef2f2108fc6c5afdae4244ff697368b801b6689a4ccec2201e9d.scope: Deactivated successfully.
Jan 23 18:39:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:39:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:39:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:39:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:39:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:39:57 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:39:57 compute-0 ceph-mon[75097]: 8.1c scrub starts
Jan 23 18:39:57 compute-0 ceph-mon[75097]: 8.1c scrub ok
Jan 23 18:39:57 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 23 18:39:57 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 23 18:39:57 compute-0 podman[106526]: 2026-01-23 18:39:57.82726956 +0000 UTC m=+0.052293734 container create 1e965afe908d644e7789519ecc9abbd24fec0e03ccd5d12298c84105f3e275d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:39:57 compute-0 systemd[1]: Started libpod-conmon-1e965afe908d644e7789519ecc9abbd24fec0e03ccd5d12298c84105f3e275d5.scope.
Jan 23 18:39:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:39:57 compute-0 podman[106526]: 2026-01-23 18:39:57.808773539 +0000 UTC m=+0.033797753 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b86371d7334d3358c49e3bca9f160b0f2be5786c37927495aa106cad2c3809/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b86371d7334d3358c49e3bca9f160b0f2be5786c37927495aa106cad2c3809/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b86371d7334d3358c49e3bca9f160b0f2be5786c37927495aa106cad2c3809/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b86371d7334d3358c49e3bca9f160b0f2be5786c37927495aa106cad2c3809/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b86371d7334d3358c49e3bca9f160b0f2be5786c37927495aa106cad2c3809/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:39:57 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:57 compute-0 podman[106526]: 2026-01-23 18:39:57.937018324 +0000 UTC m=+0.162042508 container init 1e965afe908d644e7789519ecc9abbd24fec0e03ccd5d12298c84105f3e275d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:39:57 compute-0 podman[106526]: 2026-01-23 18:39:57.949118046 +0000 UTC m=+0.174142220 container start 1e965afe908d644e7789519ecc9abbd24fec0e03ccd5d12298c84105f3e275d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:39:57 compute-0 podman[106526]: 2026-01-23 18:39:57.953408613 +0000 UTC m=+0.178432797 container attach 1e965afe908d644e7789519ecc9abbd24fec0e03ccd5d12298c84105f3e275d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 23 18:39:58 compute-0 wizardly_hertz[106543]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:39:58 compute-0 wizardly_hertz[106543]: --> All data devices are unavailable
Jan 23 18:39:58 compute-0 podman[106526]: 2026-01-23 18:39:58.496011084 +0000 UTC m=+0.721035268 container died 1e965afe908d644e7789519ecc9abbd24fec0e03ccd5d12298c84105f3e275d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:39:58 compute-0 systemd[1]: libpod-1e965afe908d644e7789519ecc9abbd24fec0e03ccd5d12298c84105f3e275d5.scope: Deactivated successfully.
Jan 23 18:39:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-82b86371d7334d3358c49e3bca9f160b0f2be5786c37927495aa106cad2c3809-merged.mount: Deactivated successfully.
Jan 23 18:39:58 compute-0 podman[106526]: 2026-01-23 18:39:58.54768931 +0000 UTC m=+0.772713494 container remove 1e965afe908d644e7789519ecc9abbd24fec0e03ccd5d12298c84105f3e275d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 18:39:58 compute-0 systemd[1]: libpod-conmon-1e965afe908d644e7789519ecc9abbd24fec0e03ccd5d12298c84105f3e275d5.scope: Deactivated successfully.
Jan 23 18:39:58 compute-0 sudo[106447]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:58 compute-0 sudo[106575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:39:58 compute-0 sudo[106575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:39:58 compute-0 sudo[106575]: pam_unix(sudo:session): session closed for user root
Jan 23 18:39:58 compute-0 sudo[106600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:39:58 compute-0 ceph-mon[75097]: 10.16 scrub starts
Jan 23 18:39:58 compute-0 ceph-mon[75097]: 10.16 scrub ok
Jan 23 18:39:58 compute-0 sudo[106600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:39:58 compute-0 ceph-mon[75097]: 2.a scrub starts
Jan 23 18:39:58 compute-0 ceph-mon[75097]: 2.a scrub ok
Jan 23 18:39:58 compute-0 ceph-mon[75097]: pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:39:58 compute-0 sshd-session[106625]: Accepted publickey for zuul from 192.168.122.30 port 37264 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:39:58 compute-0 systemd-logind[792]: New session 35 of user zuul.
Jan 23 18:39:58 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 23 18:39:58 compute-0 sshd-session[106625]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:39:59 compute-0 podman[106667]: 2026-01-23 18:39:59.003632552 +0000 UTC m=+0.038368397 container create 0f28e683252ead7e11f116b3a06ef4043dcaf6f1427933775734e5f15461e9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:39:59 compute-0 systemd[1]: Started libpod-conmon-0f28e683252ead7e11f116b3a06ef4043dcaf6f1427933775734e5f15461e9d9.scope.
Jan 23 18:39:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:39:59 compute-0 podman[106667]: 2026-01-23 18:39:59.077271627 +0000 UTC m=+0.112007442 container init 0f28e683252ead7e11f116b3a06ef4043dcaf6f1427933775734e5f15461e9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 23 18:39:59 compute-0 podman[106667]: 2026-01-23 18:39:58.987434539 +0000 UTC m=+0.022170364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:39:59 compute-0 podman[106667]: 2026-01-23 18:39:59.089157543 +0000 UTC m=+0.123893378 container start 0f28e683252ead7e11f116b3a06ef4043dcaf6f1427933775734e5f15461e9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:39:59 compute-0 upbeat_black[106712]: 167 167
Jan 23 18:39:59 compute-0 systemd[1]: libpod-0f28e683252ead7e11f116b3a06ef4043dcaf6f1427933775734e5f15461e9d9.scope: Deactivated successfully.
Jan 23 18:39:59 compute-0 conmon[106712]: conmon 0f28e683252ead7e11f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f28e683252ead7e11f116b3a06ef4043dcaf6f1427933775734e5f15461e9d9.scope/container/memory.events
Jan 23 18:39:59 compute-0 podman[106667]: 2026-01-23 18:39:59.094971968 +0000 UTC m=+0.129707853 container attach 0f28e683252ead7e11f116b3a06ef4043dcaf6f1427933775734e5f15461e9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 23 18:39:59 compute-0 podman[106667]: 2026-01-23 18:39:59.095911371 +0000 UTC m=+0.130647236 container died 0f28e683252ead7e11f116b3a06ef4043dcaf6f1427933775734e5f15461e9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d9ed621af8d8c5d4897ade5b49e8e26e7a51f9af90d23c7f162cca919e5ac31-merged.mount: Deactivated successfully.
Jan 23 18:39:59 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 23 18:39:59 compute-0 podman[106667]: 2026-01-23 18:39:59.153078706 +0000 UTC m=+0.187814521 container remove 0f28e683252ead7e11f116b3a06ef4043dcaf6f1427933775734e5f15461e9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:39:59 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 23 18:39:59 compute-0 systemd[1]: libpod-conmon-0f28e683252ead7e11f116b3a06ef4043dcaf6f1427933775734e5f15461e9d9.scope: Deactivated successfully.
Jan 23 18:39:59 compute-0 podman[106743]: 2026-01-23 18:39:59.369650053 +0000 UTC m=+0.052323376 container create 324553782fdf35a1ead1547194dde8f302f831b46af6a0a9a400c757f50113e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:39:59 compute-0 systemd[1]: Started libpod-conmon-324553782fdf35a1ead1547194dde8f302f831b46af6a0a9a400c757f50113e7.scope.
Jan 23 18:39:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c29c5eb69bc062dfb884337a4b194f19fe5dc3d10dc97bca678b61d688601d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c29c5eb69bc062dfb884337a4b194f19fe5dc3d10dc97bca678b61d688601d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c29c5eb69bc062dfb884337a4b194f19fe5dc3d10dc97bca678b61d688601d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c29c5eb69bc062dfb884337a4b194f19fe5dc3d10dc97bca678b61d688601d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:39:59 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 23 18:39:59 compute-0 podman[106743]: 2026-01-23 18:39:59.434723704 +0000 UTC m=+0.117397027 container init 324553782fdf35a1ead1547194dde8f302f831b46af6a0a9a400c757f50113e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:39:59 compute-0 podman[106743]: 2026-01-23 18:39:59.44059762 +0000 UTC m=+0.123270923 container start 324553782fdf35a1ead1547194dde8f302f831b46af6a0a9a400c757f50113e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keller, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:39:59 compute-0 podman[106743]: 2026-01-23 18:39:59.34788334 +0000 UTC m=+0.030556693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:39:59 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 23 18:39:59 compute-0 serene_keller[106798]: {
Jan 23 18:39:59 compute-0 serene_keller[106798]:     "0": [
Jan 23 18:39:59 compute-0 serene_keller[106798]:         {
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "devices": [
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "/dev/loop3"
Jan 23 18:39:59 compute-0 serene_keller[106798]:             ],
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_name": "ceph_lv0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_size": "21470642176",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "name": "ceph_lv0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "tags": {
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.cluster_name": "ceph",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.crush_device_class": "",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.encrypted": "0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.objectstore": "bluestore",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.osd_id": "0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.type": "block",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.vdo": "0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.with_tpm": "0"
Jan 23 18:39:59 compute-0 serene_keller[106798]:             },
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "type": "block",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "vg_name": "ceph_vg0"
Jan 23 18:39:59 compute-0 serene_keller[106798]:         }
Jan 23 18:39:59 compute-0 serene_keller[106798]:     ],
Jan 23 18:39:59 compute-0 serene_keller[106798]:     "1": [
Jan 23 18:39:59 compute-0 serene_keller[106798]:         {
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "devices": [
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "/dev/loop4"
Jan 23 18:39:59 compute-0 serene_keller[106798]:             ],
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_name": "ceph_lv1",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_size": "21470642176",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "name": "ceph_lv1",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "tags": {
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.cluster_name": "ceph",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.crush_device_class": "",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.encrypted": "0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.objectstore": "bluestore",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.osd_id": "1",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.type": "block",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.vdo": "0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.with_tpm": "0"
Jan 23 18:39:59 compute-0 serene_keller[106798]:             },
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "type": "block",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "vg_name": "ceph_vg1"
Jan 23 18:39:59 compute-0 serene_keller[106798]:         }
Jan 23 18:39:59 compute-0 serene_keller[106798]:     ],
Jan 23 18:39:59 compute-0 serene_keller[106798]:     "2": [
Jan 23 18:39:59 compute-0 serene_keller[106798]:         {
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "devices": [
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "/dev/loop5"
Jan 23 18:39:59 compute-0 serene_keller[106798]:             ],
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_name": "ceph_lv2",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_size": "21470642176",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "name": "ceph_lv2",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "tags": {
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.cluster_name": "ceph",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.crush_device_class": "",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.encrypted": "0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.objectstore": "bluestore",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.osd_id": "2",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.type": "block",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.vdo": "0",
Jan 23 18:39:59 compute-0 serene_keller[106798]:                 "ceph.with_tpm": "0"
Jan 23 18:39:59 compute-0 serene_keller[106798]:             },
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "type": "block",
Jan 23 18:39:59 compute-0 serene_keller[106798]:             "vg_name": "ceph_vg2"
Jan 23 18:39:59 compute-0 serene_keller[106798]:         }
Jan 23 18:39:59 compute-0 serene_keller[106798]:     ]
Jan 23 18:39:59 compute-0 serene_keller[106798]: }
Jan 23 18:39:59 compute-0 systemd[1]: libpod-324553782fdf35a1ead1547194dde8f302f831b46af6a0a9a400c757f50113e7.scope: Deactivated successfully.
Jan 23 18:39:59 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:40:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:40:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:40:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:40:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:40:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:40:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:00 compute-0 ceph-mon[75097]: 8.12 scrub starts
Jan 23 18:40:00 compute-0 ceph-mon[75097]: 8.12 scrub ok
Jan 23 18:40:00 compute-0 podman[106743]: 2026-01-23 18:40:00.976370639 +0000 UTC m=+1.659044032 container attach 324553782fdf35a1ead1547194dde8f302f831b46af6a0a9a400c757f50113e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 18:40:00 compute-0 podman[106743]: 2026-01-23 18:40:00.977831215 +0000 UTC m=+1.660504548 container died 324553782fdf35a1ead1547194dde8f302f831b46af6a0a9a400c757f50113e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keller, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 23 18:40:00 compute-0 python3.9[106855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9c29c5eb69bc062dfb884337a4b194f19fe5dc3d10dc97bca678b61d688601d-merged.mount: Deactivated successfully.
Jan 23 18:40:01 compute-0 podman[106743]: 2026-01-23 18:40:01.061548181 +0000 UTC m=+1.744221484 container remove 324553782fdf35a1ead1547194dde8f302f831b46af6a0a9a400c757f50113e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:40:01 compute-0 systemd[1]: libpod-conmon-324553782fdf35a1ead1547194dde8f302f831b46af6a0a9a400c757f50113e7.scope: Deactivated successfully.
Jan 23 18:40:01 compute-0 sudo[106600]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:01 compute-0 sudo[106876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:40:01 compute-0 sudo[106876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:40:01 compute-0 sudo[106876]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:01 compute-0 sudo[106901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:40:01 compute-0 sudo[106901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:40:01 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 23 18:40:01 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 23 18:40:01 compute-0 podman[106962]: 2026-01-23 18:40:01.546992448 +0000 UTC m=+0.052687845 container create 52c9bd58771e2667f01a23fb95c6ac77c7c6ee5b56583ea6df308e0bc59466e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:40:01 compute-0 systemd[1]: Started libpod-conmon-52c9bd58771e2667f01a23fb95c6ac77c7c6ee5b56583ea6df308e0bc59466e9.scope.
Jan 23 18:40:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:40:01 compute-0 podman[106962]: 2026-01-23 18:40:01.52663197 +0000 UTC m=+0.032327387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:40:01 compute-0 podman[106962]: 2026-01-23 18:40:01.626028717 +0000 UTC m=+0.131724144 container init 52c9bd58771e2667f01a23fb95c6ac77c7c6ee5b56583ea6df308e0bc59466e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:40:01 compute-0 podman[106962]: 2026-01-23 18:40:01.637846091 +0000 UTC m=+0.143541488 container start 52c9bd58771e2667f01a23fb95c6ac77c7c6ee5b56583ea6df308e0bc59466e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:40:01 compute-0 podman[106962]: 2026-01-23 18:40:01.642043745 +0000 UTC m=+0.147739142 container attach 52c9bd58771e2667f01a23fb95c6ac77c7c6ee5b56583ea6df308e0bc59466e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:40:01 compute-0 pedantic_nash[107001]: 167 167
Jan 23 18:40:01 compute-0 systemd[1]: libpod-52c9bd58771e2667f01a23fb95c6ac77c7c6ee5b56583ea6df308e0bc59466e9.scope: Deactivated successfully.
Jan 23 18:40:01 compute-0 podman[106962]: 2026-01-23 18:40:01.643987874 +0000 UTC m=+0.149683281 container died 52c9bd58771e2667f01a23fb95c6ac77c7c6ee5b56583ea6df308e0bc59466e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 23 18:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6236b8976f8451b2e97d8b973c07f1ba402304a91aba96b97cd6a1626eda06ea-merged.mount: Deactivated successfully.
Jan 23 18:40:01 compute-0 podman[106962]: 2026-01-23 18:40:01.685132179 +0000 UTC m=+0.190827576 container remove 52c9bd58771e2667f01a23fb95c6ac77c7c6ee5b56583ea6df308e0bc59466e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:40:01 compute-0 systemd[1]: libpod-conmon-52c9bd58771e2667f01a23fb95c6ac77c7c6ee5b56583ea6df308e0bc59466e9.scope: Deactivated successfully.
Jan 23 18:40:01 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 23 18:40:01 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 23 18:40:01 compute-0 podman[107053]: 2026-01-23 18:40:01.846256654 +0000 UTC m=+0.042970861 container create 0ed64cb52bfd8d14586638528351349c1abc2754c9e5140fad827f02b1c2c9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 18:40:01 compute-0 systemd[1]: Started libpod-conmon-0ed64cb52bfd8d14586638528351349c1abc2754c9e5140fad827f02b1c2c9d1.scope.
Jan 23 18:40:01 compute-0 podman[107053]: 2026-01-23 18:40:01.826770198 +0000 UTC m=+0.023484435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:40:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cf59ec7364d31c56a02183fd1deafc147a4d0ac81dcb9f883a1950afc8e916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cf59ec7364d31c56a02183fd1deafc147a4d0ac81dcb9f883a1950afc8e916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cf59ec7364d31c56a02183fd1deafc147a4d0ac81dcb9f883a1950afc8e916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cf59ec7364d31c56a02183fd1deafc147a4d0ac81dcb9f883a1950afc8e916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:40:01 compute-0 podman[107053]: 2026-01-23 18:40:01.943948958 +0000 UTC m=+0.140663615 container init 0ed64cb52bfd8d14586638528351349c1abc2754c9e5140fad827f02b1c2c9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chebyshev, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:40:01 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:01 compute-0 podman[107053]: 2026-01-23 18:40:01.951780954 +0000 UTC m=+0.148495141 container start 0ed64cb52bfd8d14586638528351349c1abc2754c9e5140fad827f02b1c2c9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 18:40:01 compute-0 podman[107053]: 2026-01-23 18:40:01.955766204 +0000 UTC m=+0.152480431 container attach 0ed64cb52bfd8d14586638528351349c1abc2754c9e5140fad827f02b1c2c9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 18:40:01 compute-0 ceph-mon[75097]: 8.18 scrub starts
Jan 23 18:40:01 compute-0 ceph-mon[75097]: 8.18 scrub ok
Jan 23 18:40:01 compute-0 ceph-mon[75097]: pgmap v274: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:01 compute-0 ceph-mon[75097]: 10.2 scrub starts
Jan 23 18:40:01 compute-0 ceph-mon[75097]: 10.2 scrub ok
Jan 23 18:40:02 compute-0 sudo[107147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gguijgtrrhdnlmmwrgboqyobddazjagz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193601.5809555-31-116039111624453/AnsiballZ_getent.py'
Jan 23 18:40:02 compute-0 sudo[107147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:02 compute-0 python3.9[107149]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 23 18:40:02 compute-0 sudo[107147]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:02 compute-0 lvm[107318]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:40:02 compute-0 lvm[107310]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:40:02 compute-0 lvm[107318]: VG ceph_vg1 finished
Jan 23 18:40:02 compute-0 lvm[107310]: VG ceph_vg0 finished
Jan 23 18:40:02 compute-0 lvm[107326]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:40:02 compute-0 lvm[107326]: VG ceph_vg2 finished
Jan 23 18:40:02 compute-0 relaxed_chebyshev[107113]: {}
Jan 23 18:40:02 compute-0 sudo[107379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjolontmilalztuincgdsktxokyismhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193602.505304-43-53459078683755/AnsiballZ_setup.py'
Jan 23 18:40:02 compute-0 sudo[107379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:02 compute-0 systemd[1]: libpod-0ed64cb52bfd8d14586638528351349c1abc2754c9e5140fad827f02b1c2c9d1.scope: Deactivated successfully.
Jan 23 18:40:02 compute-0 systemd[1]: libpod-0ed64cb52bfd8d14586638528351349c1abc2754c9e5140fad827f02b1c2c9d1.scope: Consumed 1.352s CPU time.
Jan 23 18:40:02 compute-0 podman[107053]: 2026-01-23 18:40:02.798692887 +0000 UTC m=+0.995407104 container died 0ed64cb52bfd8d14586638528351349c1abc2754c9e5140fad827f02b1c2c9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:40:02 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 23 18:40:02 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 23 18:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6cf59ec7364d31c56a02183fd1deafc147a4d0ac81dcb9f883a1950afc8e916-merged.mount: Deactivated successfully.
Jan 23 18:40:03 compute-0 python3.9[107381]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:40:03 compute-0 podman[107053]: 2026-01-23 18:40:03.138279049 +0000 UTC m=+1.334993246 container remove 0ed64cb52bfd8d14586638528351349c1abc2754c9e5140fad827f02b1c2c9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 18:40:03 compute-0 ceph-mon[75097]: 10.9 scrub starts
Jan 23 18:40:03 compute-0 ceph-mon[75097]: 10.9 scrub ok
Jan 23 18:40:03 compute-0 ceph-mon[75097]: pgmap v275: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:03 compute-0 ceph-mon[75097]: 2.3 scrub starts
Jan 23 18:40:03 compute-0 ceph-mon[75097]: 2.3 scrub ok
Jan 23 18:40:03 compute-0 systemd[1]: libpod-conmon-0ed64cb52bfd8d14586638528351349c1abc2754c9e5140fad827f02b1c2c9d1.scope: Deactivated successfully.
Jan 23 18:40:03 compute-0 sudo[106901]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:40:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:40:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:40:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:40:03 compute-0 sudo[107379]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:03 compute-0 sudo[107403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:40:03 compute-0 sudo[107403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:40:03 compute-0 sudo[107403]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:03 compute-0 sudo[107501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qplgopiukptpofecvzkpbslrhuoufjoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193602.505304-43-53459078683755/AnsiballZ_dnf.py'
Jan 23 18:40:03 compute-0 sudo[107501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:03 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 23 18:40:03 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 23 18:40:03 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:03 compute-0 python3.9[107503]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 18:40:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:40:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:40:04 compute-0 ceph-mon[75097]: 10.b scrub starts
Jan 23 18:40:04 compute-0 ceph-mon[75097]: 10.b scrub ok
Jan 23 18:40:05 compute-0 sudo[107501]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:05 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 23 18:40:05 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 23 18:40:05 compute-0 ceph-mon[75097]: pgmap v276: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:05 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 23 18:40:05 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 23 18:40:05 compute-0 sudo[107654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwqrkougbrqdchzxggnvntjwmqfepfmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193605.5058439-57-134753977697977/AnsiballZ_dnf.py'
Jan 23 18:40:05 compute-0 sudo[107654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:05 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:06 compute-0 python3.9[107656]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:40:06 compute-0 ceph-mon[75097]: 10.15 scrub starts
Jan 23 18:40:06 compute-0 ceph-mon[75097]: 10.15 scrub ok
Jan 23 18:40:06 compute-0 ceph-mon[75097]: 5.9 scrub starts
Jan 23 18:40:06 compute-0 ceph-mon[75097]: 5.9 scrub ok
Jan 23 18:40:06 compute-0 ceph-mon[75097]: pgmap v277: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:06 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 23 18:40:06 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 23 18:40:07 compute-0 sudo[107654]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:07 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 23 18:40:07 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 23 18:40:07 compute-0 ceph-mon[75097]: 2.d scrub starts
Jan 23 18:40:07 compute-0 ceph-mon[75097]: 2.d scrub ok
Jan 23 18:40:07 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:08 compute-0 sudo[107807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nepmdungdcvjqfmeaasokupinimskalk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193607.4829683-65-82352875796332/AnsiballZ_systemd.py'
Jan 23 18:40:08 compute-0 sudo[107807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:08 compute-0 python3.9[107809]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 18:40:08 compute-0 ceph-mon[75097]: 8.6 scrub starts
Jan 23 18:40:08 compute-0 ceph-mon[75097]: 8.6 scrub ok
Jan 23 18:40:08 compute-0 ceph-mon[75097]: pgmap v278: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:08 compute-0 sudo[107807]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 23 18:40:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 23 18:40:09 compute-0 python3.9[107962]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:40:09 compute-0 ceph-mon[75097]: 7.1c scrub starts
Jan 23 18:40:09 compute-0 ceph-mon[75097]: 7.1c scrub ok
Jan 23 18:40:09 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 23 18:40:09 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 23 18:40:09 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:10 compute-0 sudo[108112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uohhxfmhbusecvwyubshofynlkjscwva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193609.8051035-83-133131538991708/AnsiballZ_sefcontext.py'
Jan 23 18:40:10 compute-0 sudo[108112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:10 compute-0 python3.9[108114]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 23 18:40:10 compute-0 sudo[108112]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:10 compute-0 ceph-mon[75097]: 5.16 scrub starts
Jan 23 18:40:10 compute-0 ceph-mon[75097]: 5.16 scrub ok
Jan 23 18:40:10 compute-0 ceph-mon[75097]: pgmap v279: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:11 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 23 18:40:11 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 23 18:40:11 compute-0 python3.9[108264]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:40:11 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:12 compute-0 sudo[108420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcwcnqqkggjcwsxhqlchgijuvbrrczba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193612.0356483-101-116074978580570/AnsiballZ_dnf.py'
Jan 23 18:40:12 compute-0 sudo[108420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:12 compute-0 python3.9[108422]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:40:12 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 23 18:40:12 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 23 18:40:13 compute-0 ceph-mon[75097]: 8.f scrub starts
Jan 23 18:40:13 compute-0 ceph-mon[75097]: 8.f scrub ok
Jan 23 18:40:13 compute-0 ceph-mon[75097]: pgmap v280: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:13 compute-0 ceph-mon[75097]: 10.6 scrub starts
Jan 23 18:40:13 compute-0 ceph-mon[75097]: 10.6 scrub ok
Jan 23 18:40:13 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 23 18:40:13 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 23 18:40:13 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 23 18:40:13 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 23 18:40:13 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:14 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 23 18:40:14 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 23 18:40:14 compute-0 ceph-mon[75097]: 3.16 scrub starts
Jan 23 18:40:14 compute-0 ceph-mon[75097]: 3.16 scrub ok
Jan 23 18:40:14 compute-0 sudo[108420]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:15 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 23 18:40:15 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 23 18:40:15 compute-0 sudo[108573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evposuewpdouioudpwhvrrrmgtvvokvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193615.0196755-109-145845781090279/AnsiballZ_command.py'
Jan 23 18:40:15 compute-0 sudo[108573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:15 compute-0 python3.9[108575]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:40:15 compute-0 ceph-mon[75097]: 10.d scrub starts
Jan 23 18:40:15 compute-0 ceph-mon[75097]: 10.d scrub ok
Jan 23 18:40:15 compute-0 ceph-mon[75097]: pgmap v281: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:15 compute-0 ceph-mon[75097]: 7.11 scrub starts
Jan 23 18:40:15 compute-0 ceph-mon[75097]: 7.11 scrub ok
Jan 23 18:40:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:15 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:16 compute-0 sudo[108573]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:16 compute-0 ceph-mon[75097]: 10.e scrub starts
Jan 23 18:40:16 compute-0 ceph-mon[75097]: 10.e scrub ok
Jan 23 18:40:16 compute-0 ceph-mon[75097]: pgmap v282: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:17 compute-0 sudo[108860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eturdxrywcmdaogzozuhvhpkgimvxiuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193616.6032605-117-145723692866514/AnsiballZ_file.py'
Jan 23 18:40:17 compute-0 sudo[108860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:17 compute-0 python3.9[108862]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 23 18:40:17 compute-0 sudo[108860]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:17 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:18 compute-0 python3.9[109012]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:40:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 23 18:40:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 23 18:40:18 compute-0 sudo[109164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wefjkjcsqqbcfggehkkokkwplbkgzraa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193618.3728013-133-63690473306853/AnsiballZ_dnf.py'
Jan 23 18:40:18 compute-0 sudo[109164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:18 compute-0 python3.9[109166]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:40:19 compute-0 ceph-mon[75097]: pgmap v283: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:19 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 23 18:40:19 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 23 18:40:19 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:20 compute-0 ceph-mon[75097]: 8.1d scrub starts
Jan 23 18:40:20 compute-0 ceph-mon[75097]: 8.1d scrub ok
Jan 23 18:40:20 compute-0 sudo[109164]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:20 compute-0 sudo[109317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aebzxfbpbpbmmqmwdkmfewmbyxazzrlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193620.362367-142-146948366248372/AnsiballZ_dnf.py'
Jan 23 18:40:20 compute-0 sudo[109317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:20 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 23 18:40:20 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 23 18:40:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:20 compute-0 python3.9[109319]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:40:21 compute-0 ceph-mon[75097]: 6.a scrub starts
Jan 23 18:40:21 compute-0 ceph-mon[75097]: 6.a scrub ok
Jan 23 18:40:21 compute-0 ceph-mon[75097]: pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:21 compute-0 ceph-mon[75097]: 10.19 scrub starts
Jan 23 18:40:21 compute-0 ceph-mon[75097]: 10.19 scrub ok
Jan 23 18:40:21 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 23 18:40:21 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 23 18:40:21 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 23 18:40:21 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 23 18:40:21 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:22 compute-0 ceph-mon[75097]: 10.1a scrub starts
Jan 23 18:40:22 compute-0 ceph-mon[75097]: 10.1a scrub ok
Jan 23 18:40:22 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 23 18:40:22 compute-0 sudo[109317]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:22 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 23 18:40:23 compute-0 sudo[109470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtzcqujtuxflbdnffkvrhawoeqasxroj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193622.7627041-154-131468159466998/AnsiballZ_stat.py'
Jan 23 18:40:23 compute-0 sudo[109470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:23 compute-0 python3.9[109472]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:40:23 compute-0 sudo[109470]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:23 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 23 18:40:23 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 23 18:40:23 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:23 compute-0 sudo[109624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhwttjxfkdsdrkbkxhuqysgmdpquqgil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193623.4346364-162-244695302676947/AnsiballZ_slurp.py'
Jan 23 18:40:23 compute-0 sudo[109624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:24 compute-0 python3.9[109626]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 23 18:40:24 compute-0 sudo[109624]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:24 compute-0 ceph-mon[75097]: 6.5 scrub starts
Jan 23 18:40:24 compute-0 ceph-mon[75097]: 6.5 scrub ok
Jan 23 18:40:24 compute-0 ceph-mon[75097]: pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:24 compute-0 ceph-mon[75097]: 6.9 scrub starts
Jan 23 18:40:24 compute-0 ceph-mon[75097]: 6.9 scrub ok
Jan 23 18:40:24 compute-0 sshd-session[106628]: Connection closed by 192.168.122.30 port 37264
Jan 23 18:40:24 compute-0 sshd-session[106625]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:40:24 compute-0 systemd-logind[792]: Session 35 logged out. Waiting for processes to exit.
Jan 23 18:40:24 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 23 18:40:24 compute-0 systemd[1]: session-35.scope: Consumed 18.026s CPU time.
Jan 23 18:40:24 compute-0 systemd-logind[792]: Removed session 35.
Jan 23 18:40:25 compute-0 ceph-mon[75097]: 2.15 scrub starts
Jan 23 18:40:25 compute-0 ceph-mon[75097]: 2.15 scrub ok
Jan 23 18:40:25 compute-0 ceph-mon[75097]: pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:25 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 23 18:40:25 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 23 18:40:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:25 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:26 compute-0 ceph-mon[75097]: 6.7 scrub starts
Jan 23 18:40:26 compute-0 ceph-mon[75097]: 6.7 scrub ok
Jan 23 18:40:26 compute-0 ceph-mon[75097]: pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:26 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 23 18:40:26 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 23 18:40:26 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 23 18:40:26 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 23 18:40:27 compute-0 ceph-mon[75097]: 6.3 scrub starts
Jan 23 18:40:27 compute-0 ceph-mon[75097]: 6.3 scrub ok
Jan 23 18:40:27 compute-0 ceph-mon[75097]: 5.12 scrub starts
Jan 23 18:40:27 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:28 compute-0 ceph-mon[75097]: 5.12 scrub ok
Jan 23 18:40:28 compute-0 ceph-mon[75097]: pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:28 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 23 18:40:28 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 23 18:40:28 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 23 18:40:28 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 23 18:40:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:40:28
Jan 23 18:40:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:40:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:40:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'backups', 'images', 'cephfs.cephfs.meta', 'vms', '.mgr', 'volumes', '.rgw.root', 'default.rgw.meta']
Jan 23 18:40:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:40:29 compute-0 ceph-mon[75097]: 6.0 scrub starts
Jan 23 18:40:29 compute-0 ceph-mon[75097]: 6.0 scrub ok
Jan 23 18:40:29 compute-0 ceph-mon[75097]: 2.17 scrub starts
Jan 23 18:40:29 compute-0 ceph-mon[75097]: 2.17 scrub ok
Jan 23 18:40:29 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:40:30 compute-0 sshd-session[109651]: Accepted publickey for zuul from 192.168.122.30 port 39066 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:40:30 compute-0 systemd-logind[792]: New session 36 of user zuul.
Jan 23 18:40:30 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 23 18:40:30 compute-0 sshd-session[109651]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:40:30 compute-0 ceph-mon[75097]: pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:30 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 23 18:40:30 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:40:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:40:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:31 compute-0 python3.9[109804]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:40:31 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 23 18:40:31 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 23 18:40:31 compute-0 ceph-mon[75097]: 11.f scrub starts
Jan 23 18:40:31 compute-0 ceph-mon[75097]: 11.f scrub ok
Jan 23 18:40:31 compute-0 ceph-mon[75097]: 8.1b scrub starts
Jan 23 18:40:31 compute-0 ceph-mon[75097]: 8.1b scrub ok
Jan 23 18:40:31 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:32 compute-0 python3.9[109958]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:40:32 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 23 18:40:32 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 23 18:40:32 compute-0 ceph-mon[75097]: pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:32 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 23 18:40:32 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 23 18:40:33 compute-0 python3.9[110151]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:40:33 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 23 18:40:33 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 23 18:40:33 compute-0 ceph-mon[75097]: 3.18 scrub starts
Jan 23 18:40:33 compute-0 ceph-mon[75097]: 3.18 scrub ok
Jan 23 18:40:33 compute-0 ceph-mon[75097]: 11.1 scrub starts
Jan 23 18:40:33 compute-0 ceph-mon[75097]: 11.1 scrub ok
Jan 23 18:40:33 compute-0 sshd-session[109654]: Connection closed by 192.168.122.30 port 39066
Jan 23 18:40:33 compute-0 sshd-session[109651]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:40:33 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 23 18:40:33 compute-0 systemd[1]: session-36.scope: Consumed 2.278s CPU time.
Jan 23 18:40:33 compute-0 systemd-logind[792]: Session 36 logged out. Waiting for processes to exit.
Jan 23 18:40:33 compute-0 systemd-logind[792]: Removed session 36.
Jan 23 18:40:33 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 23 18:40:33 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 23 18:40:33 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:34 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 23 18:40:34 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 23 18:40:34 compute-0 ceph-mon[75097]: 6.8 scrub starts
Jan 23 18:40:34 compute-0 ceph-mon[75097]: 6.8 scrub ok
Jan 23 18:40:34 compute-0 ceph-mon[75097]: 5.13 scrub starts
Jan 23 18:40:34 compute-0 ceph-mon[75097]: 5.13 scrub ok
Jan 23 18:40:34 compute-0 ceph-mon[75097]: pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:34 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 23 18:40:34 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 23 18:40:35 compute-0 ceph-mon[75097]: 6.f scrub starts
Jan 23 18:40:35 compute-0 ceph-mon[75097]: 6.f scrub ok
Jan 23 18:40:35 compute-0 ceph-mon[75097]: 5.11 scrub starts
Jan 23 18:40:35 compute-0 ceph-mon[75097]: 5.11 scrub ok
Jan 23 18:40:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:35 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:36 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 23 18:40:36 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 23 18:40:36 compute-0 ceph-mon[75097]: pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:37 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 23 18:40:37 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 23 18:40:37 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 23 18:40:37 compute-0 ceph-mon[75097]: 5.f scrub starts
Jan 23 18:40:37 compute-0 ceph-mon[75097]: 5.f scrub ok
Jan 23 18:40:37 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 23 18:40:37 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:38 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 23 18:40:38 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 23 18:40:38 compute-0 ceph-mon[75097]: 11.e scrub starts
Jan 23 18:40:38 compute-0 ceph-mon[75097]: 11.e scrub ok
Jan 23 18:40:38 compute-0 ceph-mon[75097]: 10.14 scrub starts
Jan 23 18:40:38 compute-0 ceph-mon[75097]: 10.14 scrub ok
Jan 23 18:40:38 compute-0 ceph-mon[75097]: pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:39 compute-0 sshd-session[110177]: Accepted publickey for zuul from 192.168.122.30 port 42926 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:40:39 compute-0 systemd-logind[792]: New session 37 of user zuul.
Jan 23 18:40:39 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 23 18:40:39 compute-0 sshd-session[110177]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:40:39 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 23 18:40:39 compute-0 ceph-mon[75097]: 11.11 scrub starts
Jan 23 18:40:39 compute-0 ceph-mon[75097]: 11.11 scrub ok
Jan 23 18:40:39 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 23 18:40:39 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:40:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:40:40 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 23 18:40:40 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 23 18:40:40 compute-0 python3.9[110330]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:40:40 compute-0 ceph-mon[75097]: 10.12 scrub starts
Jan 23 18:40:40 compute-0 ceph-mon[75097]: 10.12 scrub ok
Jan 23 18:40:40 compute-0 ceph-mon[75097]: pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:41 compute-0 python3.9[110484]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:40:41 compute-0 ceph-mon[75097]: 11.1a scrub starts
Jan 23 18:40:41 compute-0 ceph-mon[75097]: 11.1a scrub ok
Jan 23 18:40:41 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:42 compute-0 sudo[110638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzfakuvbxcnmzhlzyffgmfevprnsmvcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193641.8167274-35-59393432985400/AnsiballZ_setup.py'
Jan 23 18:40:42 compute-0 sudo[110638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:42 compute-0 python3.9[110640]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:40:42 compute-0 sudo[110638]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:42 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 23 18:40:42 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 23 18:40:43 compute-0 sudo[110722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmadmkgiblkiryoqvyayvfqjsqmgzdrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193641.8167274-35-59393432985400/AnsiballZ_dnf.py'
Jan 23 18:40:43 compute-0 sudo[110722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:43 compute-0 ceph-mon[75097]: pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:43 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 23 18:40:43 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 23 18:40:43 compute-0 python3.9[110724]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:40:43 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:44 compute-0 ceph-mon[75097]: 6.2 scrub starts
Jan 23 18:40:44 compute-0 ceph-mon[75097]: 6.2 scrub ok
Jan 23 18:40:44 compute-0 ceph-mon[75097]: 11.1b scrub starts
Jan 23 18:40:44 compute-0 ceph-mon[75097]: 11.1b scrub ok
Jan 23 18:40:44 compute-0 sudo[110722]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:45 compute-0 sudo[110875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrvupmtwibpibhdeyrlcijulrqzkplbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193644.881711-47-110252692156948/AnsiballZ_setup.py'
Jan 23 18:40:45 compute-0 sudo[110875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:45 compute-0 python3.9[110877]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:40:45 compute-0 ceph-mon[75097]: pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:45 compute-0 sudo[110875]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:45 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:46 compute-0 ceph-mon[75097]: pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:46 compute-0 sudo[111070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcudbhigbrxkpnrcamorrdxzwglqodmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193646.0920618-58-184053732588179/AnsiballZ_file.py'
Jan 23 18:40:46 compute-0 sudo[111070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:46 compute-0 python3.9[111072]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:40:46 compute-0 sudo[111070]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:47 compute-0 sudo[111222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqkgufnkhxuqidnyebbvjxfsljgtoeep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193646.888774-66-98189936518418/AnsiballZ_command.py'
Jan 23 18:40:47 compute-0 sudo[111222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:47 compute-0 python3.9[111224]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:40:47 compute-0 sudo[111222]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:47 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 23 18:40:47 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 23 18:40:47 compute-0 ceph-mon[75097]: 6.6 scrub starts
Jan 23 18:40:47 compute-0 ceph-mon[75097]: 6.6 scrub ok
Jan 23 18:40:47 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:48 compute-0 sudo[111386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncxalrulyyorcyzqykyakbmqxblaptok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193647.811231-74-135578650392129/AnsiballZ_stat.py'
Jan 23 18:40:48 compute-0 sudo[111386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:48 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 23 18:40:48 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 23 18:40:48 compute-0 python3.9[111388]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:40:48 compute-0 sudo[111386]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:48 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 23 18:40:48 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 23 18:40:48 compute-0 sudo[111464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dygpicjfswqxgeuqiffwxjkinqocnxlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193647.811231-74-135578650392129/AnsiballZ_file.py'
Jan 23 18:40:48 compute-0 sudo[111464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:48 compute-0 ceph-mon[75097]: pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:48 compute-0 ceph-mon[75097]: 6.4 scrub starts
Jan 23 18:40:48 compute-0 ceph-mon[75097]: 6.4 scrub ok
Jan 23 18:40:49 compute-0 python3.9[111466]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:40:49 compute-0 sudo[111464]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:49 compute-0 sudo[111616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slfuufbafzfrawijwopqzanbybuguris ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193649.1850398-86-279230232411867/AnsiballZ_stat.py'
Jan 23 18:40:49 compute-0 sudo[111616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:49 compute-0 python3.9[111618]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:40:49 compute-0 sudo[111616]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:49 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 23 18:40:49 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 23 18:40:49 compute-0 sudo[111694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fktqtbajasmekmbhbgnrlzseefzmwzky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193649.1850398-86-279230232411867/AnsiballZ_file.py'
Jan 23 18:40:49 compute-0 sudo[111694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:49 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:49 compute-0 ceph-mon[75097]: 11.19 scrub starts
Jan 23 18:40:49 compute-0 ceph-mon[75097]: 11.19 scrub ok
Jan 23 18:40:49 compute-0 ceph-mon[75097]: 6.d scrub starts
Jan 23 18:40:49 compute-0 ceph-mon[75097]: 6.d scrub ok
Jan 23 18:40:50 compute-0 python3.9[111696]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:40:50 compute-0 sudo[111694]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:50 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 23 18:40:50 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 23 18:40:50 compute-0 sudo[111846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vglwyverkciuibgrqksocahwtuicjdlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193650.344376-99-157597561205099/AnsiballZ_ini_file.py'
Jan 23 18:40:50 compute-0 sudo[111846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:51 compute-0 ceph-mon[75097]: pgmap v299: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:51 compute-0 python3.9[111848]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:40:51 compute-0 sudo[111846]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:51 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 23 18:40:51 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 23 18:40:51 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 23 18:40:51 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 23 18:40:51 compute-0 sudo[111998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmdnsllzsotgotqaldkfderzpjjvhwuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193651.4125805-99-62688587635826/AnsiballZ_ini_file.py'
Jan 23 18:40:51 compute-0 sudo[111998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:51 compute-0 python3.9[112000]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:40:51 compute-0 sudo[111998]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:51 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:52 compute-0 ceph-mon[75097]: 11.4 scrub starts
Jan 23 18:40:52 compute-0 ceph-mon[75097]: 11.4 scrub ok
Jan 23 18:40:52 compute-0 ceph-mon[75097]: 11.2 scrub starts
Jan 23 18:40:52 compute-0 sudo[112150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygolwwiixjwwzjmojkttqejuqxwhjdyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193652.0681846-99-264179039671651/AnsiballZ_ini_file.py'
Jan 23 18:40:52 compute-0 sudo[112150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:52 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 23 18:40:52 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 23 18:40:52 compute-0 python3.9[112152]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:40:52 compute-0 sudo[112150]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:52 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 23 18:40:52 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 23 18:40:53 compute-0 sudo[112302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsxxxqgufaeqdinwukjyzddxwqvqcxrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193652.7001374-99-130870418987271/AnsiballZ_ini_file.py'
Jan 23 18:40:53 compute-0 sudo[112302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:53 compute-0 ceph-mon[75097]: 11.2 scrub ok
Jan 23 18:40:53 compute-0 ceph-mon[75097]: 11.14 scrub starts
Jan 23 18:40:53 compute-0 ceph-mon[75097]: 11.14 scrub ok
Jan 23 18:40:53 compute-0 ceph-mon[75097]: pgmap v300: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:53 compute-0 ceph-mon[75097]: 11.1f scrub starts
Jan 23 18:40:53 compute-0 ceph-mon[75097]: 6.e scrub starts
Jan 23 18:40:53 compute-0 ceph-mon[75097]: 6.e scrub ok
Jan 23 18:40:53 compute-0 python3.9[112304]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:40:53 compute-0 sudo[112302]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:53 compute-0 sudo[112454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kensalqahtskzzsmiytvwsmmuwuhseiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193653.4596484-130-124012539936865/AnsiballZ_dnf.py'
Jan 23 18:40:53 compute-0 sudo[112454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:53 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:53 compute-0 python3.9[112456]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:40:54 compute-0 ceph-mon[75097]: 11.1f scrub ok
Jan 23 18:40:55 compute-0 ceph-mon[75097]: pgmap v301: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:55 compute-0 sudo[112454]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:55 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 23 18:40:55 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 23 18:40:55 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:55 compute-0 sudo[112607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phjzdlfafcfusxvqjdpoodoirnsowjlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193655.6937652-141-210086820997602/AnsiballZ_setup.py'
Jan 23 18:40:55 compute-0 sudo[112607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:40:56 compute-0 python3.9[112609]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:40:56 compute-0 sudo[112607]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:56 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 23 18:40:56 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 23 18:40:56 compute-0 sudo[112761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvytdvnlvnpkyqubaqudkzcfrtwpuof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193656.4950538-149-100491039024353/AnsiballZ_stat.py'
Jan 23 18:40:56 compute-0 sudo[112761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:56 compute-0 python3.9[112763]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:40:56 compute-0 sudo[112761]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:57 compute-0 ceph-mon[75097]: 11.1e scrub starts
Jan 23 18:40:57 compute-0 ceph-mon[75097]: 11.1e scrub ok
Jan 23 18:40:57 compute-0 ceph-mon[75097]: pgmap v302: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:57 compute-0 sudo[112913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukrgumfzrkoujohywwzlckxhbfcxdwdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193657.2562206-158-188546659585955/AnsiballZ_stat.py'
Jan 23 18:40:57 compute-0 sudo[112913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:57 compute-0 python3.9[112915]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:40:57 compute-0 sudo[112913]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:57 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:58 compute-0 ceph-mon[75097]: 11.10 scrub starts
Jan 23 18:40:58 compute-0 ceph-mon[75097]: 11.10 scrub ok
Jan 23 18:40:58 compute-0 sudo[113065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhthemrwxxmsemgbankyvbmbofhmbyxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193658.1194336-168-80279194969046/AnsiballZ_command.py'
Jan 23 18:40:58 compute-0 sudo[113065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:58 compute-0 python3.9[113067]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:40:58 compute-0 sudo[113065]: pam_unix(sudo:session): session closed for user root
Jan 23 18:40:58 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 23 18:40:58 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 23 18:40:59 compute-0 ceph-mon[75097]: pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:40:59 compute-0 ceph-mon[75097]: 6.1 scrub starts
Jan 23 18:40:59 compute-0 ceph-mon[75097]: 6.1 scrub ok
Jan 23 18:40:59 compute-0 sudo[113218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxmlmyohzlwupqgcvfvtxeulfazgnyak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193658.897986-178-264768713160638/AnsiballZ_service_facts.py'
Jan 23 18:40:59 compute-0 sudo[113218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:40:59 compute-0 python3.9[113220]: ansible-service_facts Invoked
Jan 23 18:40:59 compute-0 network[113237]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 18:40:59 compute-0 network[113238]: 'network-scripts' will be removed from distribution in near future.
Jan 23 18:40:59 compute-0 network[113239]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 18:40:59 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:41:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:41:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:41:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:41:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:41:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:41:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:01 compute-0 ceph-mon[75097]: pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:01 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:02 compute-0 sudo[113218]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:03 compute-0 ceph-mon[75097]: pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:03 compute-0 sudo[113374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:41:03 compute-0 sudo[113374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:41:03 compute-0 sudo[113374]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:03 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 23 18:41:03 compute-0 sudo[113399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:41:03 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 23 18:41:03 compute-0 sudo[113399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:41:03 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:04 compute-0 sudo[113590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuulrtkkohlwnfyziytsetxqkfcxezge ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769193663.6414702-193-123132958097020/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769193663.6414702-193-123132958097020/args'
Jan 23 18:41:04 compute-0 sudo[113590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:04 compute-0 sudo[113590]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:04 compute-0 sudo[113399]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:41:04 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:41:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:41:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:41:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:41:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:41:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:41:04 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:41:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:41:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:41:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:41:04 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:41:04 compute-0 sudo[113646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:41:04 compute-0 sudo[113646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:41:04 compute-0 sudo[113646]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:04 compute-0 ceph-mon[75097]: 11.17 scrub starts
Jan 23 18:41:04 compute-0 ceph-mon[75097]: 11.17 scrub ok
Jan 23 18:41:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:41:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:41:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:41:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:41:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:41:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:41:04 compute-0 sudo[113671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:41:04 compute-0 sudo[113671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:41:04 compute-0 podman[113783]: 2026-01-23 18:41:04.599280917 +0000 UTC m=+0.049116915 container create 299ca5fba18aa90f7ad4f15f3b937058aa00019d67c3cd5de241032d111e5485 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_payne, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 23 18:41:04 compute-0 systemd[1]: Started libpod-conmon-299ca5fba18aa90f7ad4f15f3b937058aa00019d67c3cd5de241032d111e5485.scope.
Jan 23 18:41:04 compute-0 podman[113783]: 2026-01-23 18:41:04.574591222 +0000 UTC m=+0.024427230 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:41:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:41:04 compute-0 sudo[113851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvvmiigkihrgwnnayuirilkfepzwjufr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193664.3661103-204-179116492685771/AnsiballZ_dnf.py'
Jan 23 18:41:04 compute-0 sudo[113851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:04 compute-0 podman[113783]: 2026-01-23 18:41:04.705061464 +0000 UTC m=+0.154897502 container init 299ca5fba18aa90f7ad4f15f3b937058aa00019d67c3cd5de241032d111e5485 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_payne, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 18:41:04 compute-0 podman[113783]: 2026-01-23 18:41:04.716505357 +0000 UTC m=+0.166341315 container start 299ca5fba18aa90f7ad4f15f3b937058aa00019d67c3cd5de241032d111e5485 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_payne, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:41:04 compute-0 podman[113783]: 2026-01-23 18:41:04.721020277 +0000 UTC m=+0.170856255 container attach 299ca5fba18aa90f7ad4f15f3b937058aa00019d67c3cd5de241032d111e5485 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:41:04 compute-0 cranky_payne[113849]: 167 167
Jan 23 18:41:04 compute-0 systemd[1]: libpod-299ca5fba18aa90f7ad4f15f3b937058aa00019d67c3cd5de241032d111e5485.scope: Deactivated successfully.
Jan 23 18:41:04 compute-0 podman[113783]: 2026-01-23 18:41:04.724158361 +0000 UTC m=+0.173994349 container died 299ca5fba18aa90f7ad4f15f3b937058aa00019d67c3cd5de241032d111e5485 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_payne, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:41:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae2d9190d1e0121a5db40bf0d09f6d9864e470a2d06fe00ed8ffe9bbd66a2dc8-merged.mount: Deactivated successfully.
Jan 23 18:41:04 compute-0 podman[113783]: 2026-01-23 18:41:04.781777459 +0000 UTC m=+0.231613427 container remove 299ca5fba18aa90f7ad4f15f3b937058aa00019d67c3cd5de241032d111e5485 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:41:04 compute-0 systemd[1]: libpod-conmon-299ca5fba18aa90f7ad4f15f3b937058aa00019d67c3cd5de241032d111e5485.scope: Deactivated successfully.
Jan 23 18:41:04 compute-0 python3.9[113854]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:41:04 compute-0 podman[113877]: 2026-01-23 18:41:04.960940133 +0000 UTC m=+0.061817711 container create 33205ea76f6c2ff9f00ea880f459609e67e59a362fe634ed012d701187eaa94b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_archimedes, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:41:05 compute-0 podman[113877]: 2026-01-23 18:41:04.928035539 +0000 UTC m=+0.028913197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:41:05 compute-0 systemd[1]: Started libpod-conmon-33205ea76f6c2ff9f00ea880f459609e67e59a362fe634ed012d701187eaa94b.scope.
Jan 23 18:41:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/204eecd18712006a09582ccdbbc2f5ac35984ddcfbea450dc5eeb272d320d60d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/204eecd18712006a09582ccdbbc2f5ac35984ddcfbea450dc5eeb272d320d60d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/204eecd18712006a09582ccdbbc2f5ac35984ddcfbea450dc5eeb272d320d60d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/204eecd18712006a09582ccdbbc2f5ac35984ddcfbea450dc5eeb272d320d60d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/204eecd18712006a09582ccdbbc2f5ac35984ddcfbea450dc5eeb272d320d60d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:05 compute-0 podman[113877]: 2026-01-23 18:41:05.073512339 +0000 UTC m=+0.174389977 container init 33205ea76f6c2ff9f00ea880f459609e67e59a362fe634ed012d701187eaa94b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_archimedes, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:41:05 compute-0 podman[113877]: 2026-01-23 18:41:05.081127652 +0000 UTC m=+0.182005230 container start 33205ea76f6c2ff9f00ea880f459609e67e59a362fe634ed012d701187eaa94b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_archimedes, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 23 18:41:05 compute-0 podman[113877]: 2026-01-23 18:41:05.085316673 +0000 UTC m=+0.186194331 container attach 33205ea76f6c2ff9f00ea880f459609e67e59a362fe634ed012d701187eaa94b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:41:05 compute-0 ceph-mon[75097]: pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:05 compute-0 priceless_archimedes[113895]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:41:05 compute-0 priceless_archimedes[113895]: --> All data devices are unavailable
Jan 23 18:41:05 compute-0 systemd[1]: libpod-33205ea76f6c2ff9f00ea880f459609e67e59a362fe634ed012d701187eaa94b.scope: Deactivated successfully.
Jan 23 18:41:05 compute-0 podman[113915]: 2026-01-23 18:41:05.620511303 +0000 UTC m=+0.024784919 container died 33205ea76f6c2ff9f00ea880f459609e67e59a362fe634ed012d701187eaa94b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:41:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-204eecd18712006a09582ccdbbc2f5ac35984ddcfbea450dc5eeb272d320d60d-merged.mount: Deactivated successfully.
Jan 23 18:41:05 compute-0 podman[113915]: 2026-01-23 18:41:05.675928192 +0000 UTC m=+0.080201768 container remove 33205ea76f6c2ff9f00ea880f459609e67e59a362fe634ed012d701187eaa94b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 18:41:05 compute-0 systemd[1]: libpod-conmon-33205ea76f6c2ff9f00ea880f459609e67e59a362fe634ed012d701187eaa94b.scope: Deactivated successfully.
Jan 23 18:41:05 compute-0 sudo[113671]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:05 compute-0 sudo[113929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:41:05 compute-0 sudo[113929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:41:05 compute-0 sudo[113929]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:05 compute-0 sudo[113954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:41:05 compute-0 sudo[113954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:41:05 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:06 compute-0 sudo[113851]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:06 compute-0 podman[113992]: 2026-01-23 18:41:06.203393728 +0000 UTC m=+0.042199341 container create b9d3f5a4af120310ee6b13a753a2ff33589868c47cec41200619d92b9e208cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hopper, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:41:06 compute-0 systemd[1]: Started libpod-conmon-b9d3f5a4af120310ee6b13a753a2ff33589868c47cec41200619d92b9e208cf9.scope.
Jan 23 18:41:06 compute-0 ceph-mon[75097]: pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:41:06 compute-0 podman[113992]: 2026-01-23 18:41:06.183849049 +0000 UTC m=+0.022654682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:41:06 compute-0 podman[113992]: 2026-01-23 18:41:06.282841536 +0000 UTC m=+0.121647199 container init b9d3f5a4af120310ee6b13a753a2ff33589868c47cec41200619d92b9e208cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:41:06 compute-0 podman[113992]: 2026-01-23 18:41:06.288845525 +0000 UTC m=+0.127651148 container start b9d3f5a4af120310ee6b13a753a2ff33589868c47cec41200619d92b9e208cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hopper, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:41:06 compute-0 podman[113992]: 2026-01-23 18:41:06.293090257 +0000 UTC m=+0.131895870 container attach b9d3f5a4af120310ee6b13a753a2ff33589868c47cec41200619d92b9e208cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 18:41:06 compute-0 silly_hopper[114032]: 167 167
Jan 23 18:41:06 compute-0 systemd[1]: libpod-b9d3f5a4af120310ee6b13a753a2ff33589868c47cec41200619d92b9e208cf9.scope: Deactivated successfully.
Jan 23 18:41:06 compute-0 podman[113992]: 2026-01-23 18:41:06.294651949 +0000 UTC m=+0.133457592 container died b9d3f5a4af120310ee6b13a753a2ff33589868c47cec41200619d92b9e208cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hopper, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 18:41:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2283d534b7906347dc335b914eb74e90181127cf85be7eae694577b3e81c66f-merged.mount: Deactivated successfully.
Jan 23 18:41:06 compute-0 podman[113992]: 2026-01-23 18:41:06.338251165 +0000 UTC m=+0.177056778 container remove b9d3f5a4af120310ee6b13a753a2ff33589868c47cec41200619d92b9e208cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hopper, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:41:06 compute-0 systemd[1]: libpod-conmon-b9d3f5a4af120310ee6b13a753a2ff33589868c47cec41200619d92b9e208cf9.scope: Deactivated successfully.
Jan 23 18:41:06 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 23 18:41:06 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 23 18:41:06 compute-0 podman[114080]: 2026-01-23 18:41:06.542379771 +0000 UTC m=+0.068082966 container create 6b6a2dffdc653a4c36bf4bf0a6d456e35c85c034212b8e4a87b2eb59ce5ae2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_archimedes, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 18:41:06 compute-0 systemd[1]: Started libpod-conmon-6b6a2dffdc653a4c36bf4bf0a6d456e35c85c034212b8e4a87b2eb59ce5ae2ff.scope.
Jan 23 18:41:06 compute-0 podman[114080]: 2026-01-23 18:41:06.504200369 +0000 UTC m=+0.029903564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:41:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be05800bac1c999f689efccafa1b0b0020c986c50190e50496db1faa4e4ce64b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be05800bac1c999f689efccafa1b0b0020c986c50190e50496db1faa4e4ce64b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be05800bac1c999f689efccafa1b0b0020c986c50190e50496db1faa4e4ce64b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be05800bac1c999f689efccafa1b0b0020c986c50190e50496db1faa4e4ce64b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:06 compute-0 podman[114080]: 2026-01-23 18:41:06.631029303 +0000 UTC m=+0.156732538 container init 6b6a2dffdc653a4c36bf4bf0a6d456e35c85c034212b8e4a87b2eb59ce5ae2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:41:06 compute-0 podman[114080]: 2026-01-23 18:41:06.636665114 +0000 UTC m=+0.162368269 container start 6b6a2dffdc653a4c36bf4bf0a6d456e35c85c034212b8e4a87b2eb59ce5ae2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:41:06 compute-0 podman[114080]: 2026-01-23 18:41:06.652864063 +0000 UTC m=+0.178567208 container attach 6b6a2dffdc653a4c36bf4bf0a6d456e35c85c034212b8e4a87b2eb59ce5ae2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_archimedes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]: {
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:     "0": [
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:         {
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "devices": [
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "/dev/loop3"
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             ],
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_name": "ceph_lv0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_size": "21470642176",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "name": "ceph_lv0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "tags": {
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.cluster_name": "ceph",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.crush_device_class": "",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.encrypted": "0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.objectstore": "bluestore",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.osd_id": "0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.type": "block",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.vdo": "0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.with_tpm": "0"
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             },
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "type": "block",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "vg_name": "ceph_vg0"
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:         }
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:     ],
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:     "1": [
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:         {
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "devices": [
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "/dev/loop4"
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             ],
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_name": "ceph_lv1",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_size": "21470642176",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "name": "ceph_lv1",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "tags": {
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.cluster_name": "ceph",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.crush_device_class": "",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.encrypted": "0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.objectstore": "bluestore",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.osd_id": "1",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.type": "block",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.vdo": "0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.with_tpm": "0"
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             },
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "type": "block",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "vg_name": "ceph_vg1"
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:         }
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:     ],
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:     "2": [
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:         {
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "devices": [
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "/dev/loop5"
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             ],
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_name": "ceph_lv2",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_size": "21470642176",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "name": "ceph_lv2",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "tags": {
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.cluster_name": "ceph",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.crush_device_class": "",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.encrypted": "0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.objectstore": "bluestore",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.osd_id": "2",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.type": "block",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.vdo": "0",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:                 "ceph.with_tpm": "0"
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             },
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "type": "block",
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:             "vg_name": "ceph_vg2"
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:         }
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]:     ]
Jan 23 18:41:06 compute-0 beautiful_archimedes[114125]: }
Jan 23 18:41:07 compute-0 systemd[1]: libpod-6b6a2dffdc653a4c36bf4bf0a6d456e35c85c034212b8e4a87b2eb59ce5ae2ff.scope: Deactivated successfully.
Jan 23 18:41:07 compute-0 podman[114080]: 2026-01-23 18:41:07.001168884 +0000 UTC m=+0.526872049 container died 6b6a2dffdc653a4c36bf4bf0a6d456e35c85c034212b8e4a87b2eb59ce5ae2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 18:41:07 compute-0 sudo[114217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icfnlcfdtdnllwdjkdtltgypxzhkpdxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193666.4691308-217-113799435295881/AnsiballZ_package_facts.py'
Jan 23 18:41:07 compute-0 sudo[114217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:07 compute-0 python3.9[114219]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 23 18:41:07 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 23 18:41:07 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 23 18:41:07 compute-0 sudo[114217]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:07 compute-0 ceph-mon[75097]: 11.8 scrub starts
Jan 23 18:41:07 compute-0 ceph-mon[75097]: 11.8 scrub ok
Jan 23 18:41:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-be05800bac1c999f689efccafa1b0b0020c986c50190e50496db1faa4e4ce64b-merged.mount: Deactivated successfully.
Jan 23 18:41:07 compute-0 podman[114080]: 2026-01-23 18:41:07.719667208 +0000 UTC m=+1.245370363 container remove 6b6a2dffdc653a4c36bf4bf0a6d456e35c85c034212b8e4a87b2eb59ce5ae2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:41:07 compute-0 systemd[1]: libpod-conmon-6b6a2dffdc653a4c36bf4bf0a6d456e35c85c034212b8e4a87b2eb59ce5ae2ff.scope: Deactivated successfully.
Jan 23 18:41:07 compute-0 sudo[113954]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:07 compute-0 sudo[114245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:41:07 compute-0 sudo[114245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:41:07 compute-0 sudo[114245]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:07 compute-0 sudo[114270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:41:07 compute-0 sudo[114270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:41:07 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:08 compute-0 podman[114382]: 2026-01-23 18:41:08.253720097 +0000 UTC m=+0.054879917 container create a9dec07d9eb0999f61abdfe618dac9f0aa3d550f9da8c5068ed3e08afc60c67a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jennings, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 18:41:08 compute-0 systemd[1]: Started libpod-conmon-a9dec07d9eb0999f61abdfe618dac9f0aa3d550f9da8c5068ed3e08afc60c67a.scope.
Jan 23 18:41:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:41:08 compute-0 podman[114382]: 2026-01-23 18:41:08.228735945 +0000 UTC m=+0.029895845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:41:08 compute-0 podman[114382]: 2026-01-23 18:41:08.323027236 +0000 UTC m=+0.124187086 container init a9dec07d9eb0999f61abdfe618dac9f0aa3d550f9da8c5068ed3e08afc60c67a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:41:08 compute-0 podman[114382]: 2026-01-23 18:41:08.330243597 +0000 UTC m=+0.131403437 container start a9dec07d9eb0999f61abdfe618dac9f0aa3d550f9da8c5068ed3e08afc60c67a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jennings, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 18:41:08 compute-0 podman[114382]: 2026-01-23 18:41:08.333784172 +0000 UTC m=+0.134944022 container attach a9dec07d9eb0999f61abdfe618dac9f0aa3d550f9da8c5068ed3e08afc60c67a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jennings, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:41:08 compute-0 elegant_jennings[114422]: 167 167
Jan 23 18:41:08 compute-0 systemd[1]: libpod-a9dec07d9eb0999f61abdfe618dac9f0aa3d550f9da8c5068ed3e08afc60c67a.scope: Deactivated successfully.
Jan 23 18:41:08 compute-0 conmon[114422]: conmon a9dec07d9eb0999f61ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9dec07d9eb0999f61abdfe618dac9f0aa3d550f9da8c5068ed3e08afc60c67a.scope/container/memory.events
Jan 23 18:41:08 compute-0 podman[114382]: 2026-01-23 18:41:08.337775197 +0000 UTC m=+0.138935017 container died a9dec07d9eb0999f61abdfe618dac9f0aa3d550f9da8c5068ed3e08afc60c67a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jennings, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 18:41:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-abdc47057683e4ee5c992cfcee3eaf5bb9aed2e2eba290e1ad5182fabbd6e57a-merged.mount: Deactivated successfully.
Jan 23 18:41:08 compute-0 sudo[114460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdyfugpkjaxcnbprdeipcyhkbtqzsrin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193667.979595-227-129948365131846/AnsiballZ_stat.py'
Jan 23 18:41:08 compute-0 sudo[114460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:08 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 23 18:41:08 compute-0 podman[114382]: 2026-01-23 18:41:08.378359654 +0000 UTC m=+0.179519464 container remove a9dec07d9eb0999f61abdfe618dac9f0aa3d550f9da8c5068ed3e08afc60c67a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jennings, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 18:41:08 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 23 18:41:08 compute-0 systemd[1]: libpod-conmon-a9dec07d9eb0999f61abdfe618dac9f0aa3d550f9da8c5068ed3e08afc60c67a.scope: Deactivated successfully.
Jan 23 18:41:08 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 23 18:41:08 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 23 18:41:08 compute-0 podman[114475]: 2026-01-23 18:41:08.547345568 +0000 UTC m=+0.052427182 container create ad18aa71e58e3bb60642c90a096b8a5e4771f5fa6e49e97f01c7a5c064e8c5f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:41:08 compute-0 python3.9[114467]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:08 compute-0 sudo[114460]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:08 compute-0 systemd[1]: Started libpod-conmon-ad18aa71e58e3bb60642c90a096b8a5e4771f5fa6e49e97f01c7a5c064e8c5f0.scope.
Jan 23 18:41:08 compute-0 podman[114475]: 2026-01-23 18:41:08.521729448 +0000 UTC m=+0.026811092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:41:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/496db4b4c1169c1fa71d7a9c3f2c2e0a4f208ceb456250671cf2e6472a34ff12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/496db4b4c1169c1fa71d7a9c3f2c2e0a4f208ceb456250671cf2e6472a34ff12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/496db4b4c1169c1fa71d7a9c3f2c2e0a4f208ceb456250671cf2e6472a34ff12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/496db4b4c1169c1fa71d7a9c3f2c2e0a4f208ceb456250671cf2e6472a34ff12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:41:08 compute-0 podman[114475]: 2026-01-23 18:41:08.638147477 +0000 UTC m=+0.143229101 container init ad18aa71e58e3bb60642c90a096b8a5e4771f5fa6e49e97f01c7a5c064e8c5f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 23 18:41:08 compute-0 podman[114475]: 2026-01-23 18:41:08.645159483 +0000 UTC m=+0.150241087 container start ad18aa71e58e3bb60642c90a096b8a5e4771f5fa6e49e97f01c7a5c064e8c5f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 23 18:41:08 compute-0 podman[114475]: 2026-01-23 18:41:08.648917053 +0000 UTC m=+0.153998727 container attach ad18aa71e58e3bb60642c90a096b8a5e4771f5fa6e49e97f01c7a5c064e8c5f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:41:08 compute-0 ceph-mon[75097]: 11.1c scrub starts
Jan 23 18:41:08 compute-0 ceph-mon[75097]: 11.1c scrub ok
Jan 23 18:41:08 compute-0 ceph-mon[75097]: pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:08 compute-0 ceph-mon[75097]: 11.b scrub starts
Jan 23 18:41:08 compute-0 sudo[114573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgoqhtagdsvgrxtznqbhjccgumvtkkln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193667.979595-227-129948365131846/AnsiballZ_file.py'
Jan 23 18:41:08 compute-0 sudo[114573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:09 compute-0 python3.9[114584]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:09 compute-0 sudo[114573]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:09 compute-0 lvm[114692]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:41:09 compute-0 lvm[114692]: VG ceph_vg0 finished
Jan 23 18:41:09 compute-0 lvm[114693]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:41:09 compute-0 lvm[114693]: VG ceph_vg1 finished
Jan 23 18:41:09 compute-0 lvm[114698]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:41:09 compute-0 lvm[114698]: VG ceph_vg2 finished
Jan 23 18:41:09 compute-0 agitated_khayyam[114494]: {}
Jan 23 18:41:09 compute-0 systemd[1]: libpod-ad18aa71e58e3bb60642c90a096b8a5e4771f5fa6e49e97f01c7a5c064e8c5f0.scope: Deactivated successfully.
Jan 23 18:41:09 compute-0 systemd[1]: libpod-ad18aa71e58e3bb60642c90a096b8a5e4771f5fa6e49e97f01c7a5c064e8c5f0.scope: Consumed 1.373s CPU time.
Jan 23 18:41:09 compute-0 podman[114475]: 2026-01-23 18:41:09.458066881 +0000 UTC m=+0.963148485 container died ad18aa71e58e3bb60642c90a096b8a5e4771f5fa6e49e97f01c7a5c064e8c5f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:41:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-496db4b4c1169c1fa71d7a9c3f2c2e0a4f208ceb456250671cf2e6472a34ff12-merged.mount: Deactivated successfully.
Jan 23 18:41:09 compute-0 podman[114475]: 2026-01-23 18:41:09.510725158 +0000 UTC m=+1.015806752 container remove ad18aa71e58e3bb60642c90a096b8a5e4771f5fa6e49e97f01c7a5c064e8c5f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:41:09 compute-0 systemd[1]: libpod-conmon-ad18aa71e58e3bb60642c90a096b8a5e4771f5fa6e49e97f01c7a5c064e8c5f0.scope: Deactivated successfully.
Jan 23 18:41:09 compute-0 sudo[114270]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:41:09 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:41:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:41:09 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:41:09 compute-0 sudo[114827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zckfqkgezigdjixsvowmncktzvuemnee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193669.330612-239-149662613638788/AnsiballZ_stat.py'
Jan 23 18:41:09 compute-0 sudo[114827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:09 compute-0 sudo[114804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:41:09 compute-0 sudo[114804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:41:09 compute-0 sudo[114804]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:09 compute-0 ceph-mon[75097]: 11.b scrub ok
Jan 23 18:41:09 compute-0 ceph-mon[75097]: 11.6 scrub starts
Jan 23 18:41:09 compute-0 ceph-mon[75097]: 11.6 scrub ok
Jan 23 18:41:09 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:41:09 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:41:09 compute-0 python3.9[114839]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:09 compute-0 sudo[114827]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:09 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:10 compute-0 sudo[114917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuojaahrpylaoluhdlpvqhwmdbjbfvvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193669.330612-239-149662613638788/AnsiballZ_file.py'
Jan 23 18:41:10 compute-0 sudo[114917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:10 compute-0 python3.9[114919]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:10 compute-0 sudo[114917]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:10 compute-0 ceph-mon[75097]: pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:11 compute-0 sudo[115069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwjoedwgsvfkvvqjwmgkzapjrenhfqus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193670.8107028-257-266073262814550/AnsiballZ_lineinfile.py'
Jan 23 18:41:11 compute-0 sudo[115069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:11 compute-0 python3.9[115071]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:11 compute-0 sudo[115069]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:11 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:12 compute-0 sudo[115221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aelopdlvlwclzirwapxdzgotwhkxwayz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193672.0493493-272-146344984401740/AnsiballZ_setup.py'
Jan 23 18:41:12 compute-0 sudo[115221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:12 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 23 18:41:12 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 23 18:41:12 compute-0 python3.9[115223]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:41:12 compute-0 sudo[115221]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:13 compute-0 ceph-mon[75097]: pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:13 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 23 18:41:13 compute-0 sudo[115305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpezepiaozpnkymdmadwubfabtlwdggv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193672.0493493-272-146344984401740/AnsiballZ_systemd.py'
Jan 23 18:41:13 compute-0 sudo[115305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:13 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 23 18:41:13 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 23 18:41:13 compute-0 python3.9[115307]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:41:13 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 23 18:41:13 compute-0 sudo[115305]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:13 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:14 compute-0 ceph-mon[75097]: 9.11 scrub starts
Jan 23 18:41:14 compute-0 ceph-mon[75097]: 9.11 scrub ok
Jan 23 18:41:14 compute-0 ceph-mon[75097]: 6.c scrub starts
Jan 23 18:41:14 compute-0 ceph-mon[75097]: 6.c scrub ok
Jan 23 18:41:14 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 23 18:41:14 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 23 18:41:14 compute-0 sshd-session[110180]: Connection closed by 192.168.122.30 port 42926
Jan 23 18:41:14 compute-0 sshd-session[110177]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:41:14 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 23 18:41:14 compute-0 systemd[1]: session-37.scope: Consumed 25.234s CPU time.
Jan 23 18:41:14 compute-0 systemd-logind[792]: Session 37 logged out. Waiting for processes to exit.
Jan 23 18:41:14 compute-0 systemd-logind[792]: Removed session 37.
Jan 23 18:41:14 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 23 18:41:14 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 23 18:41:15 compute-0 ceph-mon[75097]: 9.5 scrub starts
Jan 23 18:41:15 compute-0 ceph-mon[75097]: 9.5 scrub ok
Jan 23 18:41:15 compute-0 ceph-mon[75097]: pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:15 compute-0 ceph-mon[75097]: 11.3 scrub starts
Jan 23 18:41:15 compute-0 ceph-mon[75097]: 11.3 scrub ok
Jan 23 18:41:15 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 23 18:41:15 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 23 18:41:15 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 23 18:41:15 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 23 18:41:15 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:16 compute-0 ceph-mon[75097]: 9.16 scrub starts
Jan 23 18:41:16 compute-0 ceph-mon[75097]: 9.16 scrub ok
Jan 23 18:41:16 compute-0 ceph-mon[75097]: 11.15 scrub starts
Jan 23 18:41:16 compute-0 ceph-mon[75097]: 11.15 scrub ok
Jan 23 18:41:16 compute-0 ceph-mon[75097]: 6.b scrub starts
Jan 23 18:41:16 compute-0 ceph-mon[75097]: 6.b scrub ok
Jan 23 18:41:17 compute-0 ceph-mon[75097]: pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:17 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:18 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 23 18:41:18 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 23 18:41:19 compute-0 ceph-mon[75097]: pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:19 compute-0 ceph-mon[75097]: 11.d scrub starts
Jan 23 18:41:19 compute-0 ceph-mon[75097]: 11.d scrub ok
Jan 23 18:41:19 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 23 18:41:19 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 23 18:41:19 compute-0 sshd-session[115335]: Accepted publickey for zuul from 192.168.122.30 port 51218 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:41:19 compute-0 systemd-logind[792]: New session 38 of user zuul.
Jan 23 18:41:19 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:19 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 23 18:41:19 compute-0 sshd-session[115335]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:41:20 compute-0 ceph-mon[75097]: 11.18 scrub starts
Jan 23 18:41:20 compute-0 ceph-mon[75097]: 11.18 scrub ok
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.192048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193680192152, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7283, "num_deletes": 251, "total_data_size": 10001374, "memory_usage": 10200496, "flush_reason": "Manual Compaction"}
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193680370214, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7998282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7426, "table_properties": {"data_size": 7970994, "index_size": 17893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 76851, "raw_average_key_size": 23, "raw_value_size": 7907155, "raw_average_value_size": 2392, "num_data_blocks": 785, "num_entries": 3305, "num_filter_entries": 3305, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193246, "oldest_key_time": 1769193246, "file_creation_time": 1769193680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 178214 microseconds, and 16976 cpu microseconds.
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.370267) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7998282 bytes OK
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.370289) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.371829) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.371844) EVENT_LOG_v1 {"time_micros": 1769193680371839, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.371876) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9969474, prev total WAL file size 9969474, number of live WAL files 2.
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.374621) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7810KB) 13(58KB) 8(1944B)]
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193680374698, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8060170, "oldest_snapshot_seqno": -1}
Jan 23 18:41:20 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 23 18:41:20 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 23 18:41:20 compute-0 sudo[115489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxrmtlijwifwlwxuzuhqybhajwexugpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193680.0957403-17-155487510254199/AnsiballZ_file.py'
Jan 23 18:41:20 compute-0 sudo[115489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3131 keys, 8013052 bytes, temperature: kUnknown
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193680700846, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 8013052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7986151, "index_size": 17959, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 75291, "raw_average_key_size": 24, "raw_value_size": 7923667, "raw_average_value_size": 2530, "num_data_blocks": 789, "num_entries": 3131, "num_filter_entries": 3131, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769193680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.701142) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 8013052 bytes
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.703404) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 24.7 rd, 24.6 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.7, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3420, records dropped: 289 output_compression: NoCompression
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.703425) EVENT_LOG_v1 {"time_micros": 1769193680703414, "job": 4, "event": "compaction_finished", "compaction_time_micros": 326247, "compaction_time_cpu_micros": 19637, "output_level": 6, "num_output_files": 1, "total_output_size": 8013052, "num_input_records": 3420, "num_output_records": 3131, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193680704887, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193680704985, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193680705032, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 23 18:41:20 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:41:20.374537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:41:20 compute-0 python3.9[115491]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:20 compute-0 sudo[115489]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:21 compute-0 ceph-mon[75097]: pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:21 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 23 18:41:21 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 23 18:41:21 compute-0 sudo[115641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pekwpnhozdxglusadrmmdvwlqarxybky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193681.0375743-29-280522640502231/AnsiballZ_stat.py'
Jan 23 18:41:21 compute-0 sudo[115641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:21 compute-0 python3.9[115643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:21 compute-0 sudo[115641]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:21 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 23 18:41:21 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 23 18:41:21 compute-0 sudo[115719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvgwlbhwzeeexkoxjyntfwsovkcqcmwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193681.0375743-29-280522640502231/AnsiballZ_file.py'
Jan 23 18:41:21 compute-0 sudo[115719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:21 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:22 compute-0 python3.9[115721]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:22 compute-0 sudo[115719]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:22 compute-0 ceph-mon[75097]: 9.b scrub starts
Jan 23 18:41:22 compute-0 ceph-mon[75097]: 9.b scrub ok
Jan 23 18:41:22 compute-0 ceph-mon[75097]: 9.9 scrub starts
Jan 23 18:41:22 compute-0 ceph-mon[75097]: 9.9 scrub ok
Jan 23 18:41:22 compute-0 ceph-mon[75097]: 11.16 scrub starts
Jan 23 18:41:22 compute-0 ceph-mon[75097]: 11.16 scrub ok
Jan 23 18:41:22 compute-0 ceph-mon[75097]: pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:22 compute-0 sshd-session[115338]: Connection closed by 192.168.122.30 port 51218
Jan 23 18:41:22 compute-0 sshd-session[115335]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:41:22 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 23 18:41:22 compute-0 systemd[1]: session-38.scope: Consumed 1.561s CPU time.
Jan 23 18:41:22 compute-0 systemd-logind[792]: Session 38 logged out. Waiting for processes to exit.
Jan 23 18:41:22 compute-0 systemd-logind[792]: Removed session 38.
Jan 23 18:41:22 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 23 18:41:22 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 23 18:41:23 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 23 18:41:23 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 23 18:41:23 compute-0 ceph-mon[75097]: 11.13 scrub starts
Jan 23 18:41:23 compute-0 ceph-mon[75097]: 11.13 scrub ok
Jan 23 18:41:23 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:24 compute-0 ceph-mon[75097]: 11.12 scrub starts
Jan 23 18:41:24 compute-0 ceph-mon[75097]: 11.12 scrub ok
Jan 23 18:41:24 compute-0 ceph-mon[75097]: pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:25 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 23 18:41:25 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 23 18:41:25 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 23 18:41:25 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 23 18:41:25 compute-0 ceph-mon[75097]: 11.9 scrub starts
Jan 23 18:41:25 compute-0 ceph-mon[75097]: 11.9 scrub ok
Jan 23 18:41:25 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 23 18:41:25 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 23 18:41:25 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:26 compute-0 ceph-mon[75097]: 9.d scrub starts
Jan 23 18:41:26 compute-0 ceph-mon[75097]: 9.d scrub ok
Jan 23 18:41:26 compute-0 ceph-mon[75097]: 11.a scrub starts
Jan 23 18:41:26 compute-0 ceph-mon[75097]: 11.a scrub ok
Jan 23 18:41:26 compute-0 ceph-mon[75097]: pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:26 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 23 18:41:26 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 23 18:41:27 compute-0 ceph-mon[75097]: 11.c scrub starts
Jan 23 18:41:27 compute-0 ceph-mon[75097]: 11.c scrub ok
Jan 23 18:41:27 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:28 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 23 18:41:28 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 23 18:41:28 compute-0 ceph-mon[75097]: pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:28 compute-0 ceph-mon[75097]: 9.8 scrub starts
Jan 23 18:41:28 compute-0 ceph-mon[75097]: 9.8 scrub ok
Jan 23 18:41:28 compute-0 sshd-session[115746]: Accepted publickey for zuul from 192.168.122.30 port 38372 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:41:28 compute-0 systemd-logind[792]: New session 39 of user zuul.
Jan 23 18:41:28 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 23 18:41:28 compute-0 sshd-session[115746]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:41:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:41:28
Jan 23 18:41:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:41:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:41:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'backups', '.mgr']
Jan 23 18:41:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:41:29 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 23 18:41:29 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 23 18:41:29 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 23 18:41:29 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 23 18:41:29 compute-0 ceph-mon[75097]: 9.e scrub starts
Jan 23 18:41:29 compute-0 ceph-mon[75097]: 9.e scrub ok
Jan 23 18:41:29 compute-0 python3.9[115899]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:41:29 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:41:30 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 23 18:41:30 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 23 18:41:30 compute-0 ceph-mon[75097]: 9.1 scrub starts
Jan 23 18:41:30 compute-0 ceph-mon[75097]: 9.1 scrub ok
Jan 23 18:41:30 compute-0 ceph-mon[75097]: pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:30 compute-0 ceph-mon[75097]: 9.17 scrub starts
Jan 23 18:41:30 compute-0 ceph-mon[75097]: 9.17 scrub ok
Jan 23 18:41:30 compute-0 sudo[116053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpgawyuotbxxaltewpzkwcyxgrvelddp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193690.2970278-28-54728397563470/AnsiballZ_file.py'
Jan 23 18:41:30 compute-0 sudo[116053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:41:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:41:30 compute-0 python3.9[116055]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:30 compute-0 sudo[116053]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:31 compute-0 sudo[116228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbjxnxkjhlvvjtpmzwrjbsgkpyrtkdxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193691.1834636-36-46268719518651/AnsiballZ_stat.py'
Jan 23 18:41:31 compute-0 sudo[116228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:31 compute-0 python3.9[116230]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:31 compute-0 sudo[116228]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:31 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:32 compute-0 sudo[116306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dszauogevenqaxlnseezhbtmfcswumsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193691.1834636-36-46268719518651/AnsiballZ_file.py'
Jan 23 18:41:32 compute-0 sudo[116306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:32 compute-0 python3.9[116308]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.f58oblm9 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:32 compute-0 sudo[116306]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:33 compute-0 ceph-mon[75097]: pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:33 compute-0 sudo[116458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgthvkfkzibzckrbnewfmehkavkivxig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193692.7465935-56-130194627643652/AnsiballZ_stat.py'
Jan 23 18:41:33 compute-0 sudo[116458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:33 compute-0 python3.9[116460]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:33 compute-0 sudo[116458]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:33 compute-0 sudo[116536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amslwudpxgngiocofoesvgqnfyalyefr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193692.7465935-56-130194627643652/AnsiballZ_file.py'
Jan 23 18:41:33 compute-0 sudo[116536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:33 compute-0 python3.9[116538]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.z0iy7vsm recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:33 compute-0 sudo[116536]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:33 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:34 compute-0 sudo[116688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nubqdqrisxgjebuzwhwvxzmydiqwsyaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193693.8699892-69-279872424690554/AnsiballZ_file.py'
Jan 23 18:41:34 compute-0 sudo[116688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:34 compute-0 python3.9[116690]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:41:34 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 23 18:41:34 compute-0 sudo[116688]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:34 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 23 18:41:34 compute-0 sudo[116840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hueyabevvceaciqnqxklmoenrvqjqgqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193694.5552235-77-38727425032376/AnsiballZ_stat.py'
Jan 23 18:41:34 compute-0 sudo[116840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:34 compute-0 python3.9[116842]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:35 compute-0 sudo[116840]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:35 compute-0 ceph-mon[75097]: pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:35 compute-0 sudo[116918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlmwsyaaoioysxhbqovzhtexpeknzfmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193694.5552235-77-38727425032376/AnsiballZ_file.py'
Jan 23 18:41:35 compute-0 sudo[116918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:35 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 23 18:41:35 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 23 18:41:35 compute-0 python3.9[116920]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:41:35 compute-0 sudo[116918]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:35 compute-0 sudo[117070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afcpzjvsoobqqxfrglnpopujcpnbyhln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193695.637486-77-247203378085072/AnsiballZ_stat.py'
Jan 23 18:41:35 compute-0 sudo[117070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:35 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:36 compute-0 ceph-mon[75097]: 9.3 scrub starts
Jan 23 18:41:36 compute-0 ceph-mon[75097]: 9.3 scrub ok
Jan 23 18:41:36 compute-0 python3.9[117072]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:36 compute-0 sudo[117070]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:36 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 23 18:41:36 compute-0 sudo[117148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isrunwadovkpqsfdxrscopgguufwtddg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193695.637486-77-247203378085072/AnsiballZ_file.py'
Jan 23 18:41:36 compute-0 sudo[117148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:36 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 23 18:41:36 compute-0 python3.9[117150]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:41:36 compute-0 sudo[117148]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:36 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 23 18:41:36 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 23 18:41:37 compute-0 ceph-mon[75097]: 9.1d scrub starts
Jan 23 18:41:37 compute-0 ceph-mon[75097]: 9.1d scrub ok
Jan 23 18:41:37 compute-0 ceph-mon[75097]: pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:37 compute-0 ceph-mon[75097]: 11.5 scrub starts
Jan 23 18:41:37 compute-0 ceph-mon[75097]: 11.5 scrub ok
Jan 23 18:41:37 compute-0 sudo[117300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfydsnfmbftavaafuvigswfxmyzwjnyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193696.8048248-100-159757539907711/AnsiballZ_file.py'
Jan 23 18:41:37 compute-0 sudo[117300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:37 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 23 18:41:37 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 23 18:41:37 compute-0 python3.9[117302]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:37 compute-0 sudo[117300]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:37 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 23 18:41:37 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 23 18:41:37 compute-0 sudo[117452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tisknipwdezgatuuaoscpxeoxpqzjeww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193697.556032-108-275189571515971/AnsiballZ_stat.py'
Jan 23 18:41:37 compute-0 sudo[117452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:37 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:38 compute-0 python3.9[117454]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:38 compute-0 sudo[117452]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:38 compute-0 ceph-mon[75097]: 9.1c scrub starts
Jan 23 18:41:38 compute-0 ceph-mon[75097]: 9.1c scrub ok
Jan 23 18:41:38 compute-0 ceph-mon[75097]: 9.f scrub starts
Jan 23 18:41:38 compute-0 ceph-mon[75097]: 9.f scrub ok
Jan 23 18:41:38 compute-0 sudo[117530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahynonchcwykgbsmhpdfgapsfegfpiuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193697.556032-108-275189571515971/AnsiballZ_file.py'
Jan 23 18:41:38 compute-0 sudo[117530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:38 compute-0 python3.9[117532]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:38 compute-0 sudo[117530]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:38 compute-0 sudo[117682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ambotjpxaycqexklkmcccmgzxfiletpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193698.7161143-120-112576715285701/AnsiballZ_stat.py'
Jan 23 18:41:38 compute-0 sudo[117682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:39 compute-0 ceph-mon[75097]: 9.1e scrub starts
Jan 23 18:41:39 compute-0 ceph-mon[75097]: 9.1e scrub ok
Jan 23 18:41:39 compute-0 ceph-mon[75097]: pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:39 compute-0 python3.9[117684]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:39 compute-0 sudo[117682]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:39 compute-0 sudo[117760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibbfzekiqdpuygbqexwalyxvsyldazwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193698.7161143-120-112576715285701/AnsiballZ_file.py'
Jan 23 18:41:39 compute-0 sudo[117760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:39 compute-0 python3.9[117762]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:39 compute-0 sudo[117760]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:39 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:41:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:41:40 compute-0 sudo[117912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gieyibztfmbcirbziqsowrdsvvrsnspu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193699.8362236-132-110849948915009/AnsiballZ_systemd.py'
Jan 23 18:41:40 compute-0 sudo[117912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:40 compute-0 python3.9[117914]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:41:40 compute-0 systemd[1]: Reloading.
Jan 23 18:41:40 compute-0 systemd-sysv-generator[117939]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:41:40 compute-0 systemd-rc-local-generator[117935]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:41:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:41 compute-0 sudo[117912]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:41 compute-0 ceph-mon[75097]: pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:41 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 23 18:41:41 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 23 18:41:41 compute-0 sudo[118100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkqjtfeaymhmvizwmbdwybkkkhaguuyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193701.3166223-140-58068312814837/AnsiballZ_stat.py'
Jan 23 18:41:41 compute-0 sudo[118100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:41 compute-0 python3.9[118102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:41 compute-0 sudo[118100]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:41 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:42 compute-0 sudo[118178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwbpgjkvpkjsxyqxyjmcecdhcfjvmibp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193701.3166223-140-58068312814837/AnsiballZ_file.py'
Jan 23 18:41:42 compute-0 sudo[118178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:42 compute-0 ceph-mon[75097]: 9.c scrub starts
Jan 23 18:41:42 compute-0 python3.9[118180]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:42 compute-0 sudo[118178]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:42 compute-0 sudo[118330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfdejdvjoiszofvfddpxkelzflxifecx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193702.5107024-152-2444154557421/AnsiballZ_stat.py'
Jan 23 18:41:42 compute-0 sudo[118330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:43 compute-0 python3.9[118332]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:43 compute-0 sudo[118330]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:43 compute-0 ceph-mon[75097]: 9.c scrub ok
Jan 23 18:41:43 compute-0 ceph-mon[75097]: pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:43 compute-0 sudo[118408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgkymqkafrsuizdrzlblyslsleigtiwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193702.5107024-152-2444154557421/AnsiballZ_file.py'
Jan 23 18:41:43 compute-0 sudo[118408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:43 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 23 18:41:43 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 23 18:41:43 compute-0 python3.9[118410]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:43 compute-0 sudo[118408]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:43 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 23 18:41:43 compute-0 sudo[118560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nblyufibwjxlayzoinigiubecsflqauu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193703.6701581-164-168275076366025/AnsiballZ_systemd.py'
Jan 23 18:41:43 compute-0 sudo[118560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:43 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 23 18:41:43 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:44 compute-0 ceph-mon[75097]: 9.7 scrub starts
Jan 23 18:41:44 compute-0 ceph-mon[75097]: 9.7 scrub ok
Jan 23 18:41:44 compute-0 python3.9[118562]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:41:44 compute-0 systemd[1]: Reloading.
Jan 23 18:41:44 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 23 18:41:44 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 23 18:41:44 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 23 18:41:44 compute-0 systemd-rc-local-generator[118588]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:41:44 compute-0 systemd-sysv-generator[118591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:41:44 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 23 18:41:44 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 18:41:44 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 18:41:44 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 18:41:44 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 18:41:44 compute-0 sudo[118560]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:45 compute-0 ceph-mon[75097]: 11.0 scrub starts
Jan 23 18:41:45 compute-0 ceph-mon[75097]: 11.0 scrub ok
Jan 23 18:41:45 compute-0 ceph-mon[75097]: pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:45 compute-0 ceph-mon[75097]: 9.6 scrub starts
Jan 23 18:41:45 compute-0 ceph-mon[75097]: 9.6 scrub ok
Jan 23 18:41:45 compute-0 ceph-mon[75097]: 9.1b scrub starts
Jan 23 18:41:45 compute-0 ceph-mon[75097]: 9.1b scrub ok
Jan 23 18:41:45 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 23 18:41:45 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 23 18:41:45 compute-0 python3.9[118754]: ansible-ansible.builtin.service_facts Invoked
Jan 23 18:41:45 compute-0 network[118771]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 18:41:45 compute-0 network[118772]: 'network-scripts' will be removed from distribution in near future.
Jan 23 18:41:45 compute-0 network[118773]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 18:41:45 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 23 18:41:45 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 23 18:41:45 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:46 compute-0 ceph-mon[75097]: 9.19 scrub starts
Jan 23 18:41:47 compute-0 ceph-mon[75097]: 9.19 scrub ok
Jan 23 18:41:47 compute-0 ceph-mon[75097]: 11.7 scrub starts
Jan 23 18:41:47 compute-0 ceph-mon[75097]: 11.7 scrub ok
Jan 23 18:41:47 compute-0 ceph-mon[75097]: pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:47 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 23 18:41:47 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 23 18:41:47 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:48 compute-0 ceph-mon[75097]: 11.1d scrub starts
Jan 23 18:41:48 compute-0 ceph-mon[75097]: 11.1d scrub ok
Jan 23 18:41:48 compute-0 ceph-mon[75097]: pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:48 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 23 18:41:48 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 23 18:41:49 compute-0 ceph-mon[75097]: 9.15 scrub starts
Jan 23 18:41:49 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:50 compute-0 ceph-mon[75097]: 9.15 scrub ok
Jan 23 18:41:50 compute-0 ceph-mon[75097]: pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:50 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 23 18:41:50 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 23 18:41:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:51 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 23 18:41:51 compute-0 ceph-mon[75097]: 9.18 scrub starts
Jan 23 18:41:51 compute-0 ceph-mon[75097]: 9.18 scrub ok
Jan 23 18:41:51 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 23 18:41:51 compute-0 sudo[119033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liafbyvfdqzjwccscuffpracncbqmvoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193711.0475783-190-273184675884428/AnsiballZ_stat.py'
Jan 23 18:41:51 compute-0 sudo[119033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:51 compute-0 python3.9[119035]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:51 compute-0 sudo[119033]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:51 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:52 compute-0 sudo[119111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzoyrgdcquywnivqisadqzqawcwxqyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193711.0475783-190-273184675884428/AnsiballZ_file.py'
Jan 23 18:41:52 compute-0 sudo[119111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:52 compute-0 python3.9[119113]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:52 compute-0 sudo[119111]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:52 compute-0 ceph-mon[75097]: 9.13 scrub starts
Jan 23 18:41:52 compute-0 ceph-mon[75097]: 9.13 scrub ok
Jan 23 18:41:52 compute-0 ceph-mon[75097]: pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:52 compute-0 sudo[119263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ankonfkqbhpiszceafierdhcjewcshqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193712.463363-203-26292171710632/AnsiballZ_file.py'
Jan 23 18:41:52 compute-0 sudo[119263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:53 compute-0 python3.9[119265]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:53 compute-0 sudo[119263]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:53 compute-0 sudo[119415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdvhejjvtnkskoqgloansizofyvxjker ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193713.1935093-211-68923397062978/AnsiballZ_stat.py'
Jan 23 18:41:53 compute-0 sudo[119415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:53 compute-0 python3.9[119417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:53 compute-0 sudo[119415]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:53 compute-0 sudo[119493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqxlmmzfnhhtepsagrjsclrqrumewvfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193713.1935093-211-68923397062978/AnsiballZ_file.py'
Jan 23 18:41:53 compute-0 sudo[119493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:53 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:54 compute-0 python3.9[119495]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:54 compute-0 sudo[119493]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:54 compute-0 sudo[119645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifbklhjjeeqbynnbucpgnigcpfyfupzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193714.4599216-226-237803649384675/AnsiballZ_timezone.py'
Jan 23 18:41:54 compute-0 sudo[119645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:55 compute-0 ceph-mon[75097]: pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:55 compute-0 python3.9[119647]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 18:41:55 compute-0 systemd[1]: Starting Time & Date Service...
Jan 23 18:41:55 compute-0 systemd[1]: Started Time & Date Service.
Jan 23 18:41:55 compute-0 sudo[119645]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:55 compute-0 sudo[119801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdynrksgpincvkwqpjsesprsixtacpvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193715.536737-235-128414526401745/AnsiballZ_file.py'
Jan 23 18:41:55 compute-0 sudo[119801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:55 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:56 compute-0 python3.9[119803]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:56 compute-0 sudo[119801]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:41:56 compute-0 sudo[119953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gogfxepylfuroybieznwhlptpkpcjfzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193716.2715054-243-57016303775059/AnsiballZ_stat.py'
Jan 23 18:41:56 compute-0 sudo[119953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:56 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 23 18:41:56 compute-0 python3.9[119955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:56 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 23 18:41:56 compute-0 sudo[119953]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:57 compute-0 ceph-mon[75097]: pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:57 compute-0 ceph-mon[75097]: 9.14 scrub starts
Jan 23 18:41:57 compute-0 ceph-mon[75097]: 9.14 scrub ok
Jan 23 18:41:57 compute-0 sudo[120031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wytvcmwirdkyolyvvhpybswrhuckllxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193716.2715054-243-57016303775059/AnsiballZ_file.py'
Jan 23 18:41:57 compute-0 sudo[120031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:57 compute-0 python3.9[120033]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:57 compute-0 sudo[120031]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:57 compute-0 sudo[120183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvbtsbpcrisehqhnpqaqssxvdsdlgxji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193717.4618263-255-67285192760652/AnsiballZ_stat.py'
Jan 23 18:41:57 compute-0 sudo[120183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:57 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 23 18:41:57 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 23 18:41:57 compute-0 python3.9[120185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:57 compute-0 sudo[120183]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:57 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:58 compute-0 ceph-mon[75097]: 9.10 scrub starts
Jan 23 18:41:58 compute-0 ceph-mon[75097]: 9.10 scrub ok
Jan 23 18:41:58 compute-0 sudo[120261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xslqaemwdhzduqhitagmpbnldymkqvvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193717.4618263-255-67285192760652/AnsiballZ_file.py'
Jan 23 18:41:58 compute-0 sudo[120261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:58 compute-0 python3.9[120263]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4a_ymw28 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:58 compute-0 sudo[120261]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:58 compute-0 sudo[120413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiluvcvzncmtebyvkyuavdzvbbblbzah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193718.6000237-267-121542429214444/AnsiballZ_stat.py'
Jan 23 18:41:58 compute-0 sudo[120413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:59 compute-0 python3.9[120415]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:41:59 compute-0 ceph-mon[75097]: pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:41:59 compute-0 sudo[120413]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:59 compute-0 sudo[120491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajrubxtwfmbltqmjxlxsvxqbhrbaggwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193718.6000237-267-121542429214444/AnsiballZ_file.py'
Jan 23 18:41:59 compute-0 sudo[120491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:41:59 compute-0 python3.9[120493]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:41:59 compute-0 sudo[120491]: pam_unix(sudo:session): session closed for user root
Jan 23 18:41:59 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:42:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:42:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:42:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:42:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:42:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:42:00 compute-0 sudo[120643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frwgoewitopbfrjjsorrsfgefrefzvtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193719.7693253-280-133003365988275/AnsiballZ_command.py'
Jan 23 18:42:00 compute-0 sudo[120643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:00 compute-0 python3.9[120645]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:42:00 compute-0 sudo[120643]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:01 compute-0 ceph-mon[75097]: pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:01 compute-0 sudo[120796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feaoqjpvmzmqxyczftxggfwrizqzmsiv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769193720.7152846-288-188017210878184/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 18:42:01 compute-0 sudo[120796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:01 compute-0 python3[120798]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 18:42:01 compute-0 sudo[120796]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:01 compute-0 sudo[120948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcgqcyiilqmsujdjklkpalsfhrwlevyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193721.5716987-296-117452580161665/AnsiballZ_stat.py'
Jan 23 18:42:01 compute-0 sudo[120948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:01 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:02 compute-0 python3.9[120950]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:42:02 compute-0 sudo[120948]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:02 compute-0 sudo[121026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujyoemuxtltiwptvwdccjtyznqbrsdea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193721.5716987-296-117452580161665/AnsiballZ_file.py'
Jan 23 18:42:02 compute-0 sudo[121026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:02 compute-0 python3.9[121028]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:02 compute-0 sudo[121026]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:03 compute-0 sudo[121178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtvklrtitsyymcodsgsqxeeizjkttjkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193722.7614477-308-100171114743605/AnsiballZ_stat.py'
Jan 23 18:42:03 compute-0 sudo[121178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:03 compute-0 ceph-mon[75097]: pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:03 compute-0 python3.9[121180]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:42:03 compute-0 sudo[121178]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:03 compute-0 sudo[121303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjohcyrpalhzvgmjkvzimugyvtqpshze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193722.7614477-308-100171114743605/AnsiballZ_copy.py'
Jan 23 18:42:03 compute-0 sudo[121303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:03 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 23 18:42:03 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 23 18:42:03 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:04 compute-0 python3.9[121305]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193722.7614477-308-100171114743605/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:04 compute-0 sudo[121303]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:04 compute-0 ceph-mon[75097]: 9.12 scrub starts
Jan 23 18:42:04 compute-0 ceph-mon[75097]: 9.12 scrub ok
Jan 23 18:42:04 compute-0 sudo[121455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-femmvakoklvnnpnidnyyflyqiahlhdvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193724.2272005-323-231609252832625/AnsiballZ_stat.py'
Jan 23 18:42:04 compute-0 sudo[121455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:04 compute-0 python3.9[121457]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:42:04 compute-0 sudo[121455]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:05 compute-0 sudo[121533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcfhzedlotdbqzxnalwnfssddmmoitah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193724.2272005-323-231609252832625/AnsiballZ_file.py'
Jan 23 18:42:05 compute-0 sudo[121533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:05 compute-0 python3.9[121535]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:05 compute-0 sudo[121533]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:05 compute-0 ceph-mon[75097]: pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:05 compute-0 sudo[121685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-potlqfuefbitmmuxeaniflhonfzjhumq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193725.4743392-335-247412878503492/AnsiballZ_stat.py'
Jan 23 18:42:05 compute-0 sudo[121685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:05 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 23 18:42:05 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 23 18:42:05 compute-0 python3.9[121687]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:42:05 compute-0 sudo[121685]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:05 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:06 compute-0 sudo[121763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pttdazgsbgjbcoshhakhsucyadctrntf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193725.4743392-335-247412878503492/AnsiballZ_file.py'
Jan 23 18:42:06 compute-0 sudo[121763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:06 compute-0 python3.9[121765]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:06 compute-0 sudo[121763]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:06 compute-0 ceph-mon[75097]: 9.2 scrub starts
Jan 23 18:42:06 compute-0 ceph-mon[75097]: 9.2 scrub ok
Jan 23 18:42:06 compute-0 ceph-mon[75097]: pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:06 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 23 18:42:06 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 23 18:42:06 compute-0 sudo[121915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvbiypaepeshlqgkiemiswycwfmukofd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193726.5819848-347-241652051224004/AnsiballZ_stat.py'
Jan 23 18:42:06 compute-0 sudo[121915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:07 compute-0 python3.9[121917]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:42:07 compute-0 sudo[121915]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:07 compute-0 sudo[121993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgpnbddsscshibpwztiqyqyurjecfgsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193726.5819848-347-241652051224004/AnsiballZ_file.py'
Jan 23 18:42:07 compute-0 sudo[121993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:07 compute-0 ceph-mon[75097]: 9.0 scrub starts
Jan 23 18:42:07 compute-0 ceph-mon[75097]: 9.0 scrub ok
Jan 23 18:42:07 compute-0 python3.9[121995]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:07 compute-0 sudo[121993]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:07 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:08 compute-0 sudo[122145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqxlashsltyoiqqogslccvtcxfteifiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193727.8579118-360-212597012544923/AnsiballZ_command.py'
Jan 23 18:42:08 compute-0 sudo[122145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:08 compute-0 python3.9[122147]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:42:08 compute-0 ceph-mon[75097]: pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:08 compute-0 sudo[122145]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:08 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 23 18:42:08 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 23 18:42:09 compute-0 sudo[122300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxwccwpjykllfwwfykionzqlovncglfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193728.6635668-368-20767079984343/AnsiballZ_blockinfile.py'
Jan 23 18:42:09 compute-0 sudo[122300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:09 compute-0 python3.9[122302]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:09 compute-0 sudo[122300]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:09 compute-0 ceph-mon[75097]: 9.a scrub starts
Jan 23 18:42:09 compute-0 ceph-mon[75097]: 9.a scrub ok
Jan 23 18:42:09 compute-0 sudo[122368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:42:09 compute-0 sudo[122368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:42:09 compute-0 sudo[122368]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:09 compute-0 sudo[122407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:42:09 compute-0 sudo[122407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:42:09 compute-0 sudo[122502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmpxffnedvdoljhbulzbdwpizdgbylmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193729.6026103-377-239336163806902/AnsiballZ_file.py'
Jan 23 18:42:09 compute-0 sudo[122502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:10 compute-0 python3.9[122504]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:10 compute-0 sudo[122502]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:10 compute-0 sudo[122407]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:42:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:42:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:42:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:42:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:42:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:42:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:42:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:42:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:42:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:42:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:42:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:42:10 compute-0 sudo[122612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:42:10 compute-0 sudo[122612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:42:10 compute-0 sudo[122612]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:10 compute-0 sudo[122660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:42:10 compute-0 sudo[122660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:42:10 compute-0 sudo[122735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjqnrkpnapnsrdnubkkidfpkaictwway ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193730.239475-377-114645862806385/AnsiballZ_file.py'
Jan 23 18:42:10 compute-0 sudo[122735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:10 compute-0 ceph-mon[75097]: pgmap v339: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:42:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:42:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:42:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:42:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:42:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:42:10 compute-0 python3.9[122737]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:10 compute-0 podman[122750]: 2026-01-23 18:42:10.704473974 +0000 UTC m=+0.047536669 container create 6a744545e40ac480a76c438b21327eef06d776d55de7c97bbbfd44e5f442fe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 18:42:10 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 23 18:42:10 compute-0 sudo[122735]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:10 compute-0 systemd[1]: Started libpod-conmon-6a744545e40ac480a76c438b21327eef06d776d55de7c97bbbfd44e5f442fe62.scope.
Jan 23 18:42:10 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 23 18:42:10 compute-0 podman[122750]: 2026-01-23 18:42:10.680130435 +0000 UTC m=+0.023193150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:42:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:42:10 compute-0 podman[122750]: 2026-01-23 18:42:10.841704013 +0000 UTC m=+0.184766748 container init 6a744545e40ac480a76c438b21327eef06d776d55de7c97bbbfd44e5f442fe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_zhukovsky, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 18:42:10 compute-0 podman[122750]: 2026-01-23 18:42:10.850092273 +0000 UTC m=+0.193154968 container start 6a744545e40ac480a76c438b21327eef06d776d55de7c97bbbfd44e5f442fe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 18:42:10 compute-0 podman[122750]: 2026-01-23 18:42:10.854285368 +0000 UTC m=+0.197348123 container attach 6a744545e40ac480a76c438b21327eef06d776d55de7c97bbbfd44e5f442fe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:42:10 compute-0 eager_zhukovsky[122769]: 167 167
Jan 23 18:42:10 compute-0 systemd[1]: libpod-6a744545e40ac480a76c438b21327eef06d776d55de7c97bbbfd44e5f442fe62.scope: Deactivated successfully.
Jan 23 18:42:10 compute-0 podman[122750]: 2026-01-23 18:42:10.857052757 +0000 UTC m=+0.200115452 container died 6a744545e40ac480a76c438b21327eef06d776d55de7c97bbbfd44e5f442fe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_zhukovsky, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:42:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bfe572a43441502d0171f7c15f9a35e9f1b5c62c78473aeabfda82e0c5ded1e-merged.mount: Deactivated successfully.
Jan 23 18:42:10 compute-0 podman[122750]: 2026-01-23 18:42:10.981499086 +0000 UTC m=+0.324561781 container remove 6a744545e40ac480a76c438b21327eef06d776d55de7c97bbbfd44e5f442fe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:42:11 compute-0 systemd[1]: libpod-conmon-6a744545e40ac480a76c438b21327eef06d776d55de7c97bbbfd44e5f442fe62.scope: Deactivated successfully.
Jan 23 18:42:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:11 compute-0 podman[122866]: 2026-01-23 18:42:11.166820258 +0000 UTC m=+0.040812371 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:42:11 compute-0 podman[122866]: 2026-01-23 18:42:11.338874268 +0000 UTC m=+0.212866361 container create 39941e8e17a3831971bdf1898b216c50b846b4208deaf9007e49f9ad95fd80b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 23 18:42:11 compute-0 sudo[122954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cscmoylbesjtjklgmawsusslumuulrnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193730.889261-392-73088310978254/AnsiballZ_mount.py'
Jan 23 18:42:11 compute-0 sudo[122954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:11 compute-0 systemd[1]: Started libpod-conmon-39941e8e17a3831971bdf1898b216c50b846b4208deaf9007e49f9ad95fd80b4.scope.
Jan 23 18:42:11 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92d0aa09303a045fc7b914ff049618c3bedd2bc1ea8ba29fcb1e95c74de5cdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92d0aa09303a045fc7b914ff049618c3bedd2bc1ea8ba29fcb1e95c74de5cdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92d0aa09303a045fc7b914ff049618c3bedd2bc1ea8ba29fcb1e95c74de5cdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92d0aa09303a045fc7b914ff049618c3bedd2bc1ea8ba29fcb1e95c74de5cdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92d0aa09303a045fc7b914ff049618c3bedd2bc1ea8ba29fcb1e95c74de5cdc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:11 compute-0 podman[122866]: 2026-01-23 18:42:11.465535903 +0000 UTC m=+0.339528076 container init 39941e8e17a3831971bdf1898b216c50b846b4208deaf9007e49f9ad95fd80b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:42:11 compute-0 podman[122866]: 2026-01-23 18:42:11.48380933 +0000 UTC m=+0.357801423 container start 39941e8e17a3831971bdf1898b216c50b846b4208deaf9007e49f9ad95fd80b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_faraday, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 18:42:11 compute-0 podman[122866]: 2026-01-23 18:42:11.493456581 +0000 UTC m=+0.367448704 container attach 39941e8e17a3831971bdf1898b216c50b846b4208deaf9007e49f9ad95fd80b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 18:42:11 compute-0 python3.9[122958]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 18:42:11 compute-0 sudo[122954]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:11 compute-0 ceph-mon[75097]: 9.4 scrub starts
Jan 23 18:42:11 compute-0 ceph-mon[75097]: 9.4 scrub ok
Jan 23 18:42:11 compute-0 great_faraday[122959]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:42:11 compute-0 great_faraday[122959]: --> All data devices are unavailable
Jan 23 18:42:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:12 compute-0 systemd[1]: libpod-39941e8e17a3831971bdf1898b216c50b846b4208deaf9007e49f9ad95fd80b4.scope: Deactivated successfully.
Jan 23 18:42:12 compute-0 podman[122866]: 2026-01-23 18:42:12.010343979 +0000 UTC m=+0.884336102 container died 39941e8e17a3831971bdf1898b216c50b846b4208deaf9007e49f9ad95fd80b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_faraday, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:42:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d92d0aa09303a045fc7b914ff049618c3bedd2bc1ea8ba29fcb1e95c74de5cdc-merged.mount: Deactivated successfully.
Jan 23 18:42:12 compute-0 podman[122866]: 2026-01-23 18:42:12.067125518 +0000 UTC m=+0.941117611 container remove 39941e8e17a3831971bdf1898b216c50b846b4208deaf9007e49f9ad95fd80b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_faraday, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:42:12 compute-0 systemd[1]: libpod-conmon-39941e8e17a3831971bdf1898b216c50b846b4208deaf9007e49f9ad95fd80b4.scope: Deactivated successfully.
Jan 23 18:42:12 compute-0 sudo[122660]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:12 compute-0 sudo[123140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viyrpunkiyxklfowbbujllqgfkdcldug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193731.7765563-392-83956193238284/AnsiballZ_mount.py'
Jan 23 18:42:12 compute-0 sudo[123140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:12 compute-0 sudo[123142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:42:12 compute-0 sudo[123142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:42:12 compute-0 sudo[123142]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:12 compute-0 sudo[123168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:42:12 compute-0 sudo[123168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:42:12 compute-0 python3.9[123143]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 18:42:12 compute-0 sudo[123140]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:12 compute-0 podman[123230]: 2026-01-23 18:42:12.531026161 +0000 UTC m=+0.049799105 container create d1e0a5541f9d0d60431bb23a2f83c12548399cf1fca6c45812da721774dc43ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_newton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:42:12 compute-0 systemd[1]: Started libpod-conmon-d1e0a5541f9d0d60431bb23a2f83c12548399cf1fca6c45812da721774dc43ff.scope.
Jan 23 18:42:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:42:12 compute-0 podman[123230]: 2026-01-23 18:42:12.60900802 +0000 UTC m=+0.127780984 container init d1e0a5541f9d0d60431bb23a2f83c12548399cf1fca6c45812da721774dc43ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_newton, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:42:12 compute-0 podman[123230]: 2026-01-23 18:42:12.515553024 +0000 UTC m=+0.034325988 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:42:12 compute-0 podman[123230]: 2026-01-23 18:42:12.616935678 +0000 UTC m=+0.135708612 container start d1e0a5541f9d0d60431bb23a2f83c12548399cf1fca6c45812da721774dc43ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 18:42:12 compute-0 podman[123230]: 2026-01-23 18:42:12.620243081 +0000 UTC m=+0.139016025 container attach d1e0a5541f9d0d60431bb23a2f83c12548399cf1fca6c45812da721774dc43ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_newton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 18:42:12 compute-0 peaceful_newton[123246]: 167 167
Jan 23 18:42:12 compute-0 systemd[1]: libpod-d1e0a5541f9d0d60431bb23a2f83c12548399cf1fca6c45812da721774dc43ff.scope: Deactivated successfully.
Jan 23 18:42:12 compute-0 conmon[123246]: conmon d1e0a5541f9d0d60431b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1e0a5541f9d0d60431bb23a2f83c12548399cf1fca6c45812da721774dc43ff.scope/container/memory.events
Jan 23 18:42:12 compute-0 podman[123230]: 2026-01-23 18:42:12.623608605 +0000 UTC m=+0.142381619 container died d1e0a5541f9d0d60431bb23a2f83c12548399cf1fca6c45812da721774dc43ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_newton, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:42:12 compute-0 ceph-mon[75097]: pgmap v340: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c0f4987ecde09b6d60c197895e2a286b279d0053c6f35322f5278c9580e5ee-merged.mount: Deactivated successfully.
Jan 23 18:42:12 compute-0 podman[123230]: 2026-01-23 18:42:12.666742093 +0000 UTC m=+0.185515037 container remove d1e0a5541f9d0d60431bb23a2f83c12548399cf1fca6c45812da721774dc43ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 23 18:42:12 compute-0 sshd-session[115749]: Connection closed by 192.168.122.30 port 38372
Jan 23 18:42:12 compute-0 sshd-session[115746]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:42:12 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 23 18:42:12 compute-0 systemd[1]: session-39.scope: Consumed 31.643s CPU time.
Jan 23 18:42:12 compute-0 systemd[1]: libpod-conmon-d1e0a5541f9d0d60431bb23a2f83c12548399cf1fca6c45812da721774dc43ff.scope: Deactivated successfully.
Jan 23 18:42:12 compute-0 systemd-logind[792]: Session 39 logged out. Waiting for processes to exit.
Jan 23 18:42:12 compute-0 systemd-logind[792]: Removed session 39.
Jan 23 18:42:12 compute-0 podman[123270]: 2026-01-23 18:42:12.881368726 +0000 UTC m=+0.065178689 container create 277806eb7480d84261ea1b1ed72bd908d9813986444e7118939f310ee1b9d1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hermann, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:42:12 compute-0 systemd[1]: Started libpod-conmon-277806eb7480d84261ea1b1ed72bd908d9813986444e7118939f310ee1b9d1bf.scope.
Jan 23 18:42:12 compute-0 podman[123270]: 2026-01-23 18:42:12.857465999 +0000 UTC m=+0.041275952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:42:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da508e561967ebe4fb90198e834dde9d13dad4bba06bcc8e9d6999a3ae489b59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da508e561967ebe4fb90198e834dde9d13dad4bba06bcc8e9d6999a3ae489b59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da508e561967ebe4fb90198e834dde9d13dad4bba06bcc8e9d6999a3ae489b59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da508e561967ebe4fb90198e834dde9d13dad4bba06bcc8e9d6999a3ae489b59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:12 compute-0 podman[123270]: 2026-01-23 18:42:12.992921014 +0000 UTC m=+0.176731017 container init 277806eb7480d84261ea1b1ed72bd908d9813986444e7118939f310ee1b9d1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hermann, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 18:42:13 compute-0 podman[123270]: 2026-01-23 18:42:13.006111473 +0000 UTC m=+0.189921426 container start 277806eb7480d84261ea1b1ed72bd908d9813986444e7118939f310ee1b9d1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hermann, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:42:13 compute-0 podman[123270]: 2026-01-23 18:42:13.012473243 +0000 UTC m=+0.196283166 container attach 277806eb7480d84261ea1b1ed72bd908d9813986444e7118939f310ee1b9d1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hermann, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:42:13 compute-0 modest_hermann[123286]: {
Jan 23 18:42:13 compute-0 modest_hermann[123286]:     "0": [
Jan 23 18:42:13 compute-0 modest_hermann[123286]:         {
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "devices": [
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "/dev/loop3"
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             ],
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_name": "ceph_lv0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_size": "21470642176",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "name": "ceph_lv0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "tags": {
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.cluster_name": "ceph",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.crush_device_class": "",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.encrypted": "0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.objectstore": "bluestore",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.osd_id": "0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.type": "block",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.vdo": "0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.with_tpm": "0"
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             },
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "type": "block",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "vg_name": "ceph_vg0"
Jan 23 18:42:13 compute-0 modest_hermann[123286]:         }
Jan 23 18:42:13 compute-0 modest_hermann[123286]:     ],
Jan 23 18:42:13 compute-0 modest_hermann[123286]:     "1": [
Jan 23 18:42:13 compute-0 modest_hermann[123286]:         {
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "devices": [
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "/dev/loop4"
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             ],
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_name": "ceph_lv1",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_size": "21470642176",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "name": "ceph_lv1",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "tags": {
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.cluster_name": "ceph",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.crush_device_class": "",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.encrypted": "0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.objectstore": "bluestore",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.osd_id": "1",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.type": "block",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.vdo": "0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.with_tpm": "0"
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             },
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "type": "block",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "vg_name": "ceph_vg1"
Jan 23 18:42:13 compute-0 modest_hermann[123286]:         }
Jan 23 18:42:13 compute-0 modest_hermann[123286]:     ],
Jan 23 18:42:13 compute-0 modest_hermann[123286]:     "2": [
Jan 23 18:42:13 compute-0 modest_hermann[123286]:         {
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "devices": [
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "/dev/loop5"
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             ],
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_name": "ceph_lv2",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_size": "21470642176",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "name": "ceph_lv2",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "tags": {
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.cluster_name": "ceph",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.crush_device_class": "",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.encrypted": "0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.objectstore": "bluestore",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.osd_id": "2",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.type": "block",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.vdo": "0",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:                 "ceph.with_tpm": "0"
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             },
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "type": "block",
Jan 23 18:42:13 compute-0 modest_hermann[123286]:             "vg_name": "ceph_vg2"
Jan 23 18:42:13 compute-0 modest_hermann[123286]:         }
Jan 23 18:42:13 compute-0 modest_hermann[123286]:     ]
Jan 23 18:42:13 compute-0 modest_hermann[123286]: }
Jan 23 18:42:13 compute-0 systemd[1]: libpod-277806eb7480d84261ea1b1ed72bd908d9813986444e7118939f310ee1b9d1bf.scope: Deactivated successfully.
Jan 23 18:42:13 compute-0 podman[123270]: 2026-01-23 18:42:13.363224048 +0000 UTC m=+0.547034001 container died 277806eb7480d84261ea1b1ed72bd908d9813986444e7118939f310ee1b9d1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hermann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 18:42:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-da508e561967ebe4fb90198e834dde9d13dad4bba06bcc8e9d6999a3ae489b59-merged.mount: Deactivated successfully.
Jan 23 18:42:13 compute-0 podman[123270]: 2026-01-23 18:42:13.419001782 +0000 UTC m=+0.602811735 container remove 277806eb7480d84261ea1b1ed72bd908d9813986444e7118939f310ee1b9d1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hermann, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:42:13 compute-0 systemd[1]: libpod-conmon-277806eb7480d84261ea1b1ed72bd908d9813986444e7118939f310ee1b9d1bf.scope: Deactivated successfully.
Jan 23 18:42:13 compute-0 sudo[123168]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:13 compute-0 sudo[123308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:42:13 compute-0 sudo[123308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:42:13 compute-0 sudo[123308]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:13 compute-0 sudo[123333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:42:13 compute-0 sudo[123333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:42:13 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 23 18:42:13 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 23 18:42:13 compute-0 podman[123370]: 2026-01-23 18:42:13.90871351 +0000 UTC m=+0.038522543 container create c8d87348c3081c8306580c904f873f9ac3e84d721b41dfaeeea2ffdd1e5bad9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:42:13 compute-0 systemd[1]: Started libpod-conmon-c8d87348c3081c8306580c904f873f9ac3e84d721b41dfaeeea2ffdd1e5bad9d.scope.
Jan 23 18:42:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:42:13 compute-0 ceph-mon[75097]: 9.1a scrub starts
Jan 23 18:42:13 compute-0 ceph-mon[75097]: 9.1a scrub ok
Jan 23 18:42:13 compute-0 podman[123370]: 2026-01-23 18:42:13.892342431 +0000 UTC m=+0.022151484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:42:13 compute-0 podman[123370]: 2026-01-23 18:42:13.989798777 +0000 UTC m=+0.119607830 container init c8d87348c3081c8306580c904f873f9ac3e84d721b41dfaeeea2ffdd1e5bad9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 18:42:13 compute-0 podman[123370]: 2026-01-23 18:42:13.9991282 +0000 UTC m=+0.128937233 container start c8d87348c3081c8306580c904f873f9ac3e84d721b41dfaeeea2ffdd1e5bad9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 18:42:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:14 compute-0 podman[123370]: 2026-01-23 18:42:14.002952835 +0000 UTC m=+0.132762048 container attach c8d87348c3081c8306580c904f873f9ac3e84d721b41dfaeeea2ffdd1e5bad9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_germain, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 18:42:14 compute-0 wonderful_germain[123386]: 167 167
Jan 23 18:42:14 compute-0 systemd[1]: libpod-c8d87348c3081c8306580c904f873f9ac3e84d721b41dfaeeea2ffdd1e5bad9d.scope: Deactivated successfully.
Jan 23 18:42:14 compute-0 podman[123370]: 2026-01-23 18:42:14.006421022 +0000 UTC m=+0.136230055 container died c8d87348c3081c8306580c904f873f9ac3e84d721b41dfaeeea2ffdd1e5bad9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_germain, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 18:42:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0876e22cacc311f5492c148867a4d21de59c0c65c34268f080d38172de5bfae7-merged.mount: Deactivated successfully.
Jan 23 18:42:14 compute-0 podman[123370]: 2026-01-23 18:42:14.048533725 +0000 UTC m=+0.178342768 container remove c8d87348c3081c8306580c904f873f9ac3e84d721b41dfaeeea2ffdd1e5bad9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:42:14 compute-0 systemd[1]: libpod-conmon-c8d87348c3081c8306580c904f873f9ac3e84d721b41dfaeeea2ffdd1e5bad9d.scope: Deactivated successfully.
Jan 23 18:42:14 compute-0 podman[123411]: 2026-01-23 18:42:14.195790594 +0000 UTC m=+0.044676507 container create 4cd50e6a7c6856d2e948ca1edde34ddbe169c6de4475aecc8035c4bd9a9464cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:42:14 compute-0 systemd[1]: Started libpod-conmon-4cd50e6a7c6856d2e948ca1edde34ddbe169c6de4475aecc8035c4bd9a9464cc.scope.
Jan 23 18:42:14 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab7006f3265ada10d884c205dd2cb66bc0fb08f84c760595693d009c1eab673/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab7006f3265ada10d884c205dd2cb66bc0fb08f84c760595693d009c1eab673/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab7006f3265ada10d884c205dd2cb66bc0fb08f84c760595693d009c1eab673/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab7006f3265ada10d884c205dd2cb66bc0fb08f84c760595693d009c1eab673/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:42:14 compute-0 podman[123411]: 2026-01-23 18:42:14.170953213 +0000 UTC m=+0.019839136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:42:14 compute-0 podman[123411]: 2026-01-23 18:42:14.284385648 +0000 UTC m=+0.133271651 container init 4cd50e6a7c6856d2e948ca1edde34ddbe169c6de4475aecc8035c4bd9a9464cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:42:14 compute-0 podman[123411]: 2026-01-23 18:42:14.295868216 +0000 UTC m=+0.144754159 container start 4cd50e6a7c6856d2e948ca1edde34ddbe169c6de4475aecc8035c4bd9a9464cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bell, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 23 18:42:14 compute-0 podman[123411]: 2026-01-23 18:42:14.30524052 +0000 UTC m=+0.154126533 container attach 4cd50e6a7c6856d2e948ca1edde34ddbe169c6de4475aecc8035c4bd9a9464cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:42:14 compute-0 lvm[123504]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:42:14 compute-0 lvm[123504]: VG ceph_vg0 finished
Jan 23 18:42:14 compute-0 lvm[123506]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:42:14 compute-0 lvm[123506]: VG ceph_vg1 finished
Jan 23 18:42:14 compute-0 lvm[123508]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:42:14 compute-0 lvm[123508]: VG ceph_vg2 finished
Jan 23 18:42:15 compute-0 ceph-mon[75097]: pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:15 compute-0 awesome_bell[123427]: {}
Jan 23 18:42:15 compute-0 systemd[1]: libpod-4cd50e6a7c6856d2e948ca1edde34ddbe169c6de4475aecc8035c4bd9a9464cc.scope: Deactivated successfully.
Jan 23 18:42:15 compute-0 systemd[1]: libpod-4cd50e6a7c6856d2e948ca1edde34ddbe169c6de4475aecc8035c4bd9a9464cc.scope: Consumed 1.220s CPU time.
Jan 23 18:42:15 compute-0 podman[123411]: 2026-01-23 18:42:15.074171046 +0000 UTC m=+0.923056959 container died 4cd50e6a7c6856d2e948ca1edde34ddbe169c6de4475aecc8035c4bd9a9464cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:42:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-aab7006f3265ada10d884c205dd2cb66bc0fb08f84c760595693d009c1eab673-merged.mount: Deactivated successfully.
Jan 23 18:42:15 compute-0 podman[123411]: 2026-01-23 18:42:15.114979347 +0000 UTC m=+0.963865260 container remove 4cd50e6a7c6856d2e948ca1edde34ddbe169c6de4475aecc8035c4bd9a9464cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bell, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:42:15 compute-0 systemd[1]: libpod-conmon-4cd50e6a7c6856d2e948ca1edde34ddbe169c6de4475aecc8035c4bd9a9464cc.scope: Deactivated successfully.
Jan 23 18:42:15 compute-0 sudo[123333]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:42:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:42:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:42:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:42:15 compute-0 sudo[123524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:42:15 compute-0 sudo[123524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:42:15 compute-0 sudo[123524]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:42:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:42:16 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 23 18:42:16 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 23 18:42:17 compute-0 ceph-mon[75097]: pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:17 compute-0 ceph-mon[75097]: 9.1f scrub starts
Jan 23 18:42:17 compute-0 ceph-mon[75097]: 9.1f scrub ok
Jan 23 18:42:17 compute-0 sshd-session[123549]: Accepted publickey for zuul from 192.168.122.30 port 35060 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:42:17 compute-0 systemd-logind[792]: New session 40 of user zuul.
Jan 23 18:42:17 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 23 18:42:17 compute-0 sshd-session[123549]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:42:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:18 compute-0 sudo[123702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqqgwzckhnntckgokidpiaiebiciiznn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193738.01369-16-74944307341752/AnsiballZ_tempfile.py'
Jan 23 18:42:18 compute-0 sudo[123702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:18 compute-0 python3.9[123704]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 23 18:42:18 compute-0 sudo[123702]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:19 compute-0 ceph-mon[75097]: pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:19 compute-0 sudo[123854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avrngfjvoiwjbigdtiylcxlqplyrrdtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193738.86031-28-280271024777140/AnsiballZ_stat.py'
Jan 23 18:42:19 compute-0 sudo[123854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:19 compute-0 python3.9[123856]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:42:19 compute-0 sudo[123854]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:20 compute-0 sudo[124008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvmijtuctpwihwricvajfljzfgddtopc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193739.7342246-36-51183404550915/AnsiballZ_slurp.py'
Jan 23 18:42:20 compute-0 sudo[124008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:20 compute-0 python3.9[124010]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 23 18:42:20 compute-0 sudo[124008]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:20 compute-0 sshd-session[71264]: Received disconnect from 38.129.56.53 port 54696:11: disconnected by user
Jan 23 18:42:20 compute-0 sshd-session[71264]: Disconnected from user zuul 38.129.56.53 port 54696
Jan 23 18:42:20 compute-0 sshd-session[71261]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:42:20 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 18:42:20 compute-0 systemd[1]: session-17.scope: Consumed 1min 46.009s CPU time.
Jan 23 18:42:20 compute-0 systemd-logind[792]: Session 17 logged out. Waiting for processes to exit.
Jan 23 18:42:20 compute-0 systemd-logind[792]: Removed session 17.
Jan 23 18:42:20 compute-0 sudo[124160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfqnxastjithsrcpkbqakdgesziacylh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193740.6065586-44-77116064890214/AnsiballZ_stat.py'
Jan 23 18:42:20 compute-0 sudo[124160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:21 compute-0 python3.9[124162]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.8e0w3dx1 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:42:21 compute-0 sudo[124160]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:21 compute-0 ceph-mon[75097]: pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:21 compute-0 sudo[124285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpxmwxnhwjgnugibtjazxzsuagwxiwbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193740.6065586-44-77116064890214/AnsiballZ_copy.py'
Jan 23 18:42:21 compute-0 sudo[124285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:21 compute-0 python3.9[124287]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.8e0w3dx1 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193740.6065586-44-77116064890214/.source.8e0w3dx1 _original_basename=.n_frngz9 follow=False checksum=479ecb03f80b3050441307329a195b86dab809f9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:21 compute-0 sudo[124285]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:22 compute-0 sudo[124437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnlelmacgmyyrzajittvisusvdjsnqbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193742.0007117-59-56874426138327/AnsiballZ_setup.py'
Jan 23 18:42:22 compute-0 sudo[124437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:22 compute-0 python3.9[124439]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:42:22 compute-0 sudo[124437]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:23 compute-0 ceph-mon[75097]: pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:23 compute-0 sudo[124589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrrphnxpdrkvvhkteyeqtjywxafqgfws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193743.2051768-68-76818256750841/AnsiballZ_blockinfile.py'
Jan 23 18:42:23 compute-0 sudo[124589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:23 compute-0 python3.9[124591]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCobDOsKKtrBxxiW11IdJB+DSS7U3BXSBxEvj98QzFsxUb7LT72c/qQyw1FndkcLhf/ByB4jb62eQ2rMtw0t7LYZv8TnCwi/5eP3Hr86L7zpXFmzXd9PB/86i7dB7w5TocA1lSGVXlesybtRng2vdk4NKxmKFVKgAJk7aqIhUlF1ChIqfXa2Ec+MoOQQHBTJ4z2LF6Flc7HGurN1JS4EV7thWcZUo3ZlyiTNXsdofgbHnJDgm9mDvadlIusWxj90mDGG2GKCsThhd4TUOJ2qmruFYlbuao+E3TI29WS2hDLFfSfMri52xY4EdSf4ufNFe8o7Hn0XVT8F7+Ej4MBBWYZuRyi+g+sb6PtPSIdGf+UeJ36olx+bPEosasEAMFykrRuJ9nFga+N8nM8xEqkOeAC6b/cV9QkXrxXnsdwW4hw3qRlgpp3KdzUC04ogUerCZY/ZDY6WkkI7ctYP9C3XBa1365qJffsXZZXs0f+Efmg0zJQL1c9Bpc9LFxygb1rmes=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBTQDdW9v4CHI+JR/pzChdYLJQc7s/5lZoWJSYnxBiI
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFfSTOePL8EECyCguUZi5cdDHdVP5VkJulKKYHgxi96opql/u9hWMjGIzUS1jDVZLEqQGHVCg+p+I+MX2WiarKk=
                                              create=True mode=0644 path=/tmp/ansible.8e0w3dx1 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:23 compute-0 sudo[124589]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:24 compute-0 sudo[124741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvmopbzhycbdspldtwdbciucyekuavph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193744.1068017-76-178139302756728/AnsiballZ_command.py'
Jan 23 18:42:24 compute-0 sudo[124741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:24 compute-0 python3.9[124743]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.8e0w3dx1' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:42:24 compute-0 sudo[124741]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:25 compute-0 ceph-mon[75097]: pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:25 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 18:42:25 compute-0 sudo[124897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahorbajqsxagjbwuiuggwwvesqvsiqbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193744.9876797-84-184014483797771/AnsiballZ_file.py'
Jan 23 18:42:25 compute-0 sudo[124897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:25 compute-0 python3.9[124899]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.8e0w3dx1 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:25 compute-0 sudo[124897]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:25 compute-0 sshd-session[123552]: Connection closed by 192.168.122.30 port 35060
Jan 23 18:42:25 compute-0 sshd-session[123549]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:42:25 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 23 18:42:25 compute-0 systemd[1]: session-40.scope: Consumed 5.424s CPU time.
Jan 23 18:42:25 compute-0 systemd-logind[792]: Session 40 logged out. Waiting for processes to exit.
Jan 23 18:42:25 compute-0 systemd-logind[792]: Removed session 40.
Jan 23 18:42:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:27 compute-0 ceph-mon[75097]: pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:42:28
Jan 23 18:42:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:42:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:42:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups']
Jan 23 18:42:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:42:29 compute-0 ceph-mon[75097]: pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:42:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:42:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:31 compute-0 ceph-mon[75097]: pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:31 compute-0 sshd-session[124924]: Accepted publickey for zuul from 192.168.122.30 port 32858 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:42:31 compute-0 systemd-logind[792]: New session 41 of user zuul.
Jan 23 18:42:31 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 23 18:42:31 compute-0 sshd-session[124924]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:42:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:32 compute-0 python3.9[125077]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:42:33 compute-0 ceph-mon[75097]: pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:33 compute-0 sudo[125231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqrzkhnhvwfqvtzyruryqxsmeamkrbgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193753.1266031-27-149741622941192/AnsiballZ_systemd.py'
Jan 23 18:42:33 compute-0 sudo[125231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:34 compute-0 python3.9[125233]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 18:42:34 compute-0 sudo[125231]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:34 compute-0 ceph-mon[75097]: pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:34 compute-0 sudo[125385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzobkcpqabymczqiwmxtkfvybrasymq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193754.3819184-35-51953562116963/AnsiballZ_systemd.py'
Jan 23 18:42:34 compute-0 sudo[125385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:35 compute-0 python3.9[125387]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:42:35 compute-0 sudo[125385]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:35 compute-0 sudo[125538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpgljszhyvfjmzzhagmvumtuhxbxuczf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193755.3558297-44-141258718058233/AnsiballZ_command.py'
Jan 23 18:42:35 compute-0 sudo[125538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:36 compute-0 python3.9[125540]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:42:36 compute-0 sudo[125538]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:36 compute-0 sudo[125691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reonpgzqtptgtjddpblkhveybfzbzmlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193756.3262727-52-18474589648465/AnsiballZ_stat.py'
Jan 23 18:42:36 compute-0 sudo[125691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:37 compute-0 python3.9[125693]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:42:37 compute-0 sudo[125691]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:37 compute-0 ceph-mon[75097]: pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:37 compute-0 sudo[125843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iodxzluhbignvfqcwjjbshvubszobnfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193757.207189-61-267790628845084/AnsiballZ_file.py'
Jan 23 18:42:37 compute-0 sudo[125843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:37 compute-0 python3.9[125845]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:42:37 compute-0 sudo[125843]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:38 compute-0 sshd-session[124927]: Connection closed by 192.168.122.30 port 32858
Jan 23 18:42:38 compute-0 sshd-session[124924]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:42:38 compute-0 systemd-logind[792]: Session 41 logged out. Waiting for processes to exit.
Jan 23 18:42:38 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 23 18:42:38 compute-0 systemd[1]: session-41.scope: Consumed 4.336s CPU time.
Jan 23 18:42:38 compute-0 systemd-logind[792]: Removed session 41.
Jan 23 18:42:39 compute-0 ceph-mon[75097]: pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:42:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:42:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:41 compute-0 ceph-mon[75097]: pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:43 compute-0 ceph-mon[75097]: pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:43 compute-0 sshd-session[125870]: Accepted publickey for zuul from 192.168.122.30 port 41032 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:42:43 compute-0 systemd-logind[792]: New session 42 of user zuul.
Jan 23 18:42:43 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 23 18:42:43 compute-0 sshd-session[125870]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:42:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:44 compute-0 python3.9[126023]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:42:45 compute-0 ceph-mon[75097]: pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:45 compute-0 sudo[126177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itwznlucigopvvyxzdwjzidapeilsrbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193764.9361057-29-173918624718313/AnsiballZ_setup.py'
Jan 23 18:42:45 compute-0 sudo[126177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:45 compute-0 python3.9[126179]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:42:45 compute-0 sudo[126177]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:46 compute-0 sudo[126261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmxczqkamcevleflgrsgykdqmexekbxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193764.9361057-29-173918624718313/AnsiballZ_dnf.py'
Jan 23 18:42:46 compute-0 sudo[126261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:42:46 compute-0 python3.9[126263]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 18:42:47 compute-0 ceph-mon[75097]: pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:47 compute-0 sudo[126261]: pam_unix(sudo:session): session closed for user root
Jan 23 18:42:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:48 compute-0 python3.9[126414]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:42:49 compute-0 ceph-mon[75097]: pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:49 compute-0 python3.9[126565]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 18:42:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:50 compute-0 python3.9[126715]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:42:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:51 compute-0 ceph-mon[75097]: pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:51 compute-0 python3.9[126865]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:42:51 compute-0 sshd-session[125873]: Connection closed by 192.168.122.30 port 41032
Jan 23 18:42:51 compute-0 sshd-session[125870]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:42:51 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 23 18:42:51 compute-0 systemd[1]: session-42.scope: Consumed 6.442s CPU time.
Jan 23 18:42:51 compute-0 systemd-logind[792]: Session 42 logged out. Waiting for processes to exit.
Jan 23 18:42:51 compute-0 systemd-logind[792]: Removed session 42.
Jan 23 18:42:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:53 compute-0 ceph-mon[75097]: pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:55 compute-0 ceph-mon[75097]: pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:42:57 compute-0 sshd-session[126891]: Accepted publickey for zuul from 192.168.122.30 port 43996 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:42:57 compute-0 systemd-logind[792]: New session 43 of user zuul.
Jan 23 18:42:57 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 23 18:42:57 compute-0 ceph-mon[75097]: pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:57 compute-0 sshd-session[126891]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:42:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:58 compute-0 python3.9[127044]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:42:59 compute-0 ceph-mon[75097]: pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:42:59 compute-0 sudo[127198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnwlcbqnaghtgfxmuqmgouqgmmuopwph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193779.4453752-45-47988954299041/AnsiballZ_file.py'
Jan 23 18:42:59 compute-0 sudo[127198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:43:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:43:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:43:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:43:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:43:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:43:00 compute-0 python3.9[127200]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:00 compute-0 sudo[127198]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:00 compute-0 sudo[127350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgegobmgoajkarpvrcwzuivmnsfmzcsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193780.3198192-45-73241488254585/AnsiballZ_file.py'
Jan 23 18:43:00 compute-0 sudo[127350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:00 compute-0 python3.9[127352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:00 compute-0 sudo[127350]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:01 compute-0 ceph-mon[75097]: pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:01 compute-0 sudo[127502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xharkbshmbsahbzmbvlhldfbrwxblswd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193781.0710819-60-251992482516854/AnsiballZ_stat.py'
Jan 23 18:43:01 compute-0 sudo[127502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:01 compute-0 python3.9[127504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:01 compute-0 sudo[127502]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:02 compute-0 sudo[127625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scxxewphkbmfibzbwqpvpqzuihkxhgjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193781.0710819-60-251992482516854/AnsiballZ_copy.py'
Jan 23 18:43:02 compute-0 sudo[127625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:02 compute-0 python3.9[127627]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193781.0710819-60-251992482516854/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=11d01d431e61bde29122a61504c43056e33ce3aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:02 compute-0 sudo[127625]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:02 compute-0 sudo[127777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkrvghnlfxnbyurqivhnpoijcvqtgaxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193782.6458185-60-196125312751940/AnsiballZ_stat.py'
Jan 23 18:43:02 compute-0 sudo[127777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:03 compute-0 python3.9[127779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:03 compute-0 sudo[127777]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:03 compute-0 ceph-mon[75097]: pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:03 compute-0 sudo[127900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-todoyptfpcqzgmuofmgtepdrkfabjffy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193782.6458185-60-196125312751940/AnsiballZ_copy.py'
Jan 23 18:43:03 compute-0 sudo[127900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:03 compute-0 python3.9[127902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193782.6458185-60-196125312751940/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d027c0af7c6c820814db5c6b8e653fa5da33d321 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:03 compute-0 sudo[127900]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:04 compute-0 sudo[128052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hohxvpnhrccvdzuzumfudkwwqppwbhbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193783.8774595-60-246461977082876/AnsiballZ_stat.py'
Jan 23 18:43:04 compute-0 sudo[128052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:04 compute-0 python3.9[128054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:04 compute-0 sudo[128052]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:04 compute-0 sudo[128175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ittulmirftqpyuhnwfxcamyhmkrrysnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193783.8774595-60-246461977082876/AnsiballZ_copy.py'
Jan 23 18:43:04 compute-0 sudo[128175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:04 compute-0 python3.9[128177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193783.8774595-60-246461977082876/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1a09ae4976f1d173f833a08750ca59b614647ad4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:04 compute-0 sudo[128175]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:05 compute-0 ceph-mon[75097]: pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:05 compute-0 sudo[128327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-behuskisxkpwcpiymiykrvvxjmxefcou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193785.1385918-104-181094746636725/AnsiballZ_file.py'
Jan 23 18:43:05 compute-0 sudo[128327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:05 compute-0 python3.9[128329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:05 compute-0 sudo[128327]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:06 compute-0 sudo[128479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntcavajipzxzulkrylpjqwfhdnwpseaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193785.8487432-104-141487125302912/AnsiballZ_file.py'
Jan 23 18:43:06 compute-0 sudo[128479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:06 compute-0 python3.9[128481]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:06 compute-0 sudo[128479]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:06 compute-0 sudo[128631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wljkiitopdltqnofoofvwfcfqpfhhvqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193786.563123-119-241006454449551/AnsiballZ_stat.py'
Jan 23 18:43:06 compute-0 sudo[128631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:07 compute-0 python3.9[128633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:07 compute-0 sudo[128631]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:07 compute-0 ceph-mon[75097]: pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:07 compute-0 sudo[128754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thywjlctulhhqprqfvubgtnwboernaqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193786.563123-119-241006454449551/AnsiballZ_copy.py'
Jan 23 18:43:07 compute-0 sudo[128754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:07 compute-0 python3.9[128756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193786.563123-119-241006454449551/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d2a5abc3baf81f60280f74d3d3f01e6c23e16902 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:07 compute-0 sudo[128754]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:08 compute-0 sudo[128906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khcxlsanafweuqgfseezlvgkiaejmmtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193787.7909517-119-16640851161723/AnsiballZ_stat.py'
Jan 23 18:43:08 compute-0 sudo[128906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:08 compute-0 python3.9[128908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:08 compute-0 sudo[128906]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:08 compute-0 sudo[129029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aelndceugrtnagcmdacjleuvgnpaebun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193787.7909517-119-16640851161723/AnsiballZ_copy.py'
Jan 23 18:43:08 compute-0 sudo[129029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:09 compute-0 python3.9[129031]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193787.7909517-119-16640851161723/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=af96e1d64570d1524997bf549b5168c40d0380ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:09 compute-0 sudo[129029]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:09 compute-0 ceph-mon[75097]: pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:09 compute-0 sudo[129181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duxhxqztgjlocvwxbsiragagxmqgtrau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193789.2530243-119-243232825765539/AnsiballZ_stat.py'
Jan 23 18:43:09 compute-0 sudo[129181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:09 compute-0 python3.9[129183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:09 compute-0 sudo[129181]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:10 compute-0 sudo[129304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyrgdqeonqqptxpjdqdnirwwuxracvta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193789.2530243-119-243232825765539/AnsiballZ_copy.py'
Jan 23 18:43:10 compute-0 sudo[129304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:10 compute-0 python3.9[129306]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193789.2530243-119-243232825765539/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=51dbe0218e0b279f27978c70ce7327aaccd9a4cb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:10 compute-0 ceph-mon[75097]: pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:10 compute-0 sudo[129304]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:10 compute-0 sudo[129456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xngyahfeigolmsujddcyskojvhwfwqek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193790.5289536-163-232463253469402/AnsiballZ_file.py'
Jan 23 18:43:10 compute-0 sudo[129456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:11 compute-0 python3.9[129458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:11 compute-0 sudo[129456]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:11 compute-0 sudo[129608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfgggucvgalqngbzjiiddacaxjxvfiua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193791.217798-163-46457331115115/AnsiballZ_file.py'
Jan 23 18:43:11 compute-0 sudo[129608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:11 compute-0 python3.9[129610]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:11 compute-0 sudo[129608]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:12 compute-0 sudo[129760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tugtmbeqzqakakqhbtusyiirjefvfuty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193791.9486208-178-270384267809653/AnsiballZ_stat.py'
Jan 23 18:43:12 compute-0 sudo[129760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:12 compute-0 python3.9[129762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:12 compute-0 sudo[129760]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:12 compute-0 sudo[129883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihgqjgsxhokmooqivavvkyaxuyxfadnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193791.9486208-178-270384267809653/AnsiballZ_copy.py'
Jan 23 18:43:12 compute-0 sudo[129883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:13 compute-0 python3.9[129885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193791.9486208-178-270384267809653/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5c148051d901009b8822c4054c1bf3ab1f48dfb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:13 compute-0 sudo[129883]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:13 compute-0 ceph-mon[75097]: pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:13 compute-0 sudo[130035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cimvwnszfpxxllshfcqdanoqcxbtbwwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193793.2058647-178-222470046313712/AnsiballZ_stat.py'
Jan 23 18:43:13 compute-0 sudo[130035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:13 compute-0 python3.9[130037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:13 compute-0 sudo[130035]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:14 compute-0 sudo[130158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmaykgsdpgrcshoqmsandecrdvvbpjmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193793.2058647-178-222470046313712/AnsiballZ_copy.py'
Jan 23 18:43:14 compute-0 sudo[130158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:14 compute-0 python3.9[130160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193793.2058647-178-222470046313712/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=af96e1d64570d1524997bf549b5168c40d0380ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:14 compute-0 sudo[130158]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:14 compute-0 sudo[130310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdbgiaknafxxqpmawfnibajdhnchmnfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193794.382967-178-37411603245125/AnsiballZ_stat.py'
Jan 23 18:43:14 compute-0 sudo[130310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:14 compute-0 python3.9[130312]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:14 compute-0 sudo[130310]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:15 compute-0 ceph-mon[75097]: pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:15 compute-0 sudo[130404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:43:15 compute-0 sudo[130404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:43:15 compute-0 sudo[130404]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:15 compute-0 sudo[130462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dagqnypbngvuldmxojmfzkinxllnnmmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193794.382967-178-37411603245125/AnsiballZ_copy.py'
Jan 23 18:43:15 compute-0 sudo[130462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:15 compute-0 sudo[130456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:43:15 compute-0 sudo[130456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:43:15 compute-0 python3.9[130483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193794.382967-178-37411603245125/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=83170c2342ef79b90a180e6e4faa76a6ca88a489 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:15 compute-0 sudo[130462]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:15 compute-0 sudo[130456]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:43:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:43:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:43:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:43:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:43:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:43:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:43:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:43:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:43:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:43:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:43:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:43:15 compute-0 sudo[130541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:43:15 compute-0 sudo[130541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:43:16 compute-0 sudo[130541]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:16 compute-0 sudo[130566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:43:16 compute-0 sudo[130566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:43:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:43:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:43:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:43:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:43:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:43:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:43:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:16 compute-0 podman[130626]: 2026-01-23 18:43:16.338377506 +0000 UTC m=+0.055753900 container create a146cb0437d38d97e6d0db887e63cbf6618aa0f82a3b8b04429835e76281fdf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 18:43:16 compute-0 systemd[1]: Started libpod-conmon-a146cb0437d38d97e6d0db887e63cbf6618aa0f82a3b8b04429835e76281fdf2.scope.
Jan 23 18:43:16 compute-0 podman[130626]: 2026-01-23 18:43:16.309453753 +0000 UTC m=+0.026830227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:43:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:43:16 compute-0 podman[130626]: 2026-01-23 18:43:16.43276713 +0000 UTC m=+0.150143584 container init a146cb0437d38d97e6d0db887e63cbf6618aa0f82a3b8b04429835e76281fdf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:43:16 compute-0 podman[130626]: 2026-01-23 18:43:16.444418895 +0000 UTC m=+0.161795319 container start a146cb0437d38d97e6d0db887e63cbf6618aa0f82a3b8b04429835e76281fdf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:43:16 compute-0 podman[130626]: 2026-01-23 18:43:16.448872507 +0000 UTC m=+0.166248991 container attach a146cb0437d38d97e6d0db887e63cbf6618aa0f82a3b8b04429835e76281fdf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 18:43:16 compute-0 optimistic_hermann[130643]: 167 167
Jan 23 18:43:16 compute-0 systemd[1]: libpod-a146cb0437d38d97e6d0db887e63cbf6618aa0f82a3b8b04429835e76281fdf2.scope: Deactivated successfully.
Jan 23 18:43:16 compute-0 podman[130626]: 2026-01-23 18:43:16.454789556 +0000 UTC m=+0.172165990 container died a146cb0437d38d97e6d0db887e63cbf6618aa0f82a3b8b04429835e76281fdf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6cbcd8c55069cf5a08608b79f52dbfd7fe8c9e8955afe22ab8b6fe50dcf17b8-merged.mount: Deactivated successfully.
Jan 23 18:43:16 compute-0 podman[130626]: 2026-01-23 18:43:16.508673661 +0000 UTC m=+0.226050055 container remove a146cb0437d38d97e6d0db887e63cbf6618aa0f82a3b8b04429835e76281fdf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:43:16 compute-0 systemd[1]: libpod-conmon-a146cb0437d38d97e6d0db887e63cbf6618aa0f82a3b8b04429835e76281fdf2.scope: Deactivated successfully.
Jan 23 18:43:16 compute-0 podman[130668]: 2026-01-23 18:43:16.74366975 +0000 UTC m=+0.063407433 container create 1c1dfac9171b287e926d9476ade6fb8a59ec1e0976590a250ae6abf7b87db9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_galileo, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 18:43:16 compute-0 systemd[1]: Started libpod-conmon-1c1dfac9171b287e926d9476ade6fb8a59ec1e0976590a250ae6abf7b87db9a2.scope.
Jan 23 18:43:16 compute-0 podman[130668]: 2026-01-23 18:43:16.719904851 +0000 UTC m=+0.039642544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:43:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b42f645789593254532e4ee02d40c851580f239c8e446ecb2f20391d3210db8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b42f645789593254532e4ee02d40c851580f239c8e446ecb2f20391d3210db8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b42f645789593254532e4ee02d40c851580f239c8e446ecb2f20391d3210db8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b42f645789593254532e4ee02d40c851580f239c8e446ecb2f20391d3210db8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b42f645789593254532e4ee02d40c851580f239c8e446ecb2f20391d3210db8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:16 compute-0 podman[130668]: 2026-01-23 18:43:16.85850949 +0000 UTC m=+0.178247203 container init 1c1dfac9171b287e926d9476ade6fb8a59ec1e0976590a250ae6abf7b87db9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:43:16 compute-0 podman[130668]: 2026-01-23 18:43:16.871950897 +0000 UTC m=+0.191688580 container start 1c1dfac9171b287e926d9476ade6fb8a59ec1e0976590a250ae6abf7b87db9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_galileo, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:43:16 compute-0 podman[130668]: 2026-01-23 18:43:16.87641462 +0000 UTC m=+0.196152363 container attach 1c1dfac9171b287e926d9476ade6fb8a59ec1e0976590a250ae6abf7b87db9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_galileo, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 18:43:17 compute-0 ceph-mon[75097]: pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:17 compute-0 infallible_galileo[130684]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:43:17 compute-0 infallible_galileo[130684]: --> All data devices are unavailable
Jan 23 18:43:17 compute-0 systemd[1]: libpod-1c1dfac9171b287e926d9476ade6fb8a59ec1e0976590a250ae6abf7b87db9a2.scope: Deactivated successfully.
Jan 23 18:43:17 compute-0 podman[130704]: 2026-01-23 18:43:17.517677579 +0000 UTC m=+0.029974665 container died 1c1dfac9171b287e926d9476ade6fb8a59ec1e0976590a250ae6abf7b87db9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b42f645789593254532e4ee02d40c851580f239c8e446ecb2f20391d3210db8-merged.mount: Deactivated successfully.
Jan 23 18:43:17 compute-0 podman[130704]: 2026-01-23 18:43:17.566458299 +0000 UTC m=+0.078755385 container remove 1c1dfac9171b287e926d9476ade6fb8a59ec1e0976590a250ae6abf7b87db9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_galileo, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 18:43:17 compute-0 systemd[1]: libpod-conmon-1c1dfac9171b287e926d9476ade6fb8a59ec1e0976590a250ae6abf7b87db9a2.scope: Deactivated successfully.
Jan 23 18:43:17 compute-0 sudo[130566]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:17 compute-0 sudo[130721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:43:17 compute-0 sudo[130721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:43:17 compute-0 sudo[130721]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:17 compute-0 sudo[130746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:43:17 compute-0 sudo[130746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:43:18 compute-0 podman[130781]: 2026-01-23 18:43:18.022517739 +0000 UTC m=+0.038711676 container create 1f51204d8d2dc1ee258d8c0d986ae202c7b63be0ad343ea16aedd2313e6b487c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:43:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:18 compute-0 systemd[1]: Started libpod-conmon-1f51204d8d2dc1ee258d8c0d986ae202c7b63be0ad343ea16aedd2313e6b487c.scope.
Jan 23 18:43:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:43:18 compute-0 podman[130781]: 2026-01-23 18:43:18.006017204 +0000 UTC m=+0.022211141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:43:18 compute-0 podman[130781]: 2026-01-23 18:43:18.117013024 +0000 UTC m=+0.133206961 container init 1f51204d8d2dc1ee258d8c0d986ae202c7b63be0ad343ea16aedd2313e6b487c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 18:43:18 compute-0 podman[130781]: 2026-01-23 18:43:18.129012055 +0000 UTC m=+0.145206012 container start 1f51204d8d2dc1ee258d8c0d986ae202c7b63be0ad343ea16aedd2313e6b487c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 18:43:18 compute-0 mystifying_wing[130797]: 167 167
Jan 23 18:43:18 compute-0 podman[130781]: 2026-01-23 18:43:18.134487416 +0000 UTC m=+0.150681373 container attach 1f51204d8d2dc1ee258d8c0d986ae202c7b63be0ad343ea16aedd2313e6b487c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 18:43:18 compute-0 systemd[1]: libpod-1f51204d8d2dc1ee258d8c0d986ae202c7b63be0ad343ea16aedd2313e6b487c.scope: Deactivated successfully.
Jan 23 18:43:18 compute-0 podman[130781]: 2026-01-23 18:43:18.135330332 +0000 UTC m=+0.151524289 container died 1f51204d8d2dc1ee258d8c0d986ae202c7b63be0ad343ea16aedd2313e6b487c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-9586c92ebd6938ee7bd108461828c66f02bf775b9c02e6c49d86d2252d76ddeb-merged.mount: Deactivated successfully.
Jan 23 18:43:18 compute-0 podman[130781]: 2026-01-23 18:43:18.188219208 +0000 UTC m=+0.204413165 container remove 1f51204d8d2dc1ee258d8c0d986ae202c7b63be0ad343ea16aedd2313e6b487c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:43:18 compute-0 systemd[1]: libpod-conmon-1f51204d8d2dc1ee258d8c0d986ae202c7b63be0ad343ea16aedd2313e6b487c.scope: Deactivated successfully.
Jan 23 18:43:18 compute-0 podman[130873]: 2026-01-23 18:43:18.376980693 +0000 UTC m=+0.047034289 container create 94fbe4c1a13ab9b4665440714bb5a356111ea9ae71923ad355ed626cc9803914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:43:18 compute-0 systemd[1]: Started libpod-conmon-94fbe4c1a13ab9b4665440714bb5a356111ea9ae71923ad355ed626cc9803914.scope.
Jan 23 18:43:18 compute-0 podman[130873]: 2026-01-23 18:43:18.356480495 +0000 UTC m=+0.026534141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:43:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b0da3af06cfa4de89220ca838f4b3f3cbeaf7bf369577df6358b26381f35c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b0da3af06cfa4de89220ca838f4b3f3cbeaf7bf369577df6358b26381f35c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b0da3af06cfa4de89220ca838f4b3f3cbeaf7bf369577df6358b26381f35c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b0da3af06cfa4de89220ca838f4b3f3cbeaf7bf369577df6358b26381f35c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:18 compute-0 podman[130873]: 2026-01-23 18:43:18.518021817 +0000 UTC m=+0.188075493 container init 94fbe4c1a13ab9b4665440714bb5a356111ea9ae71923ad355ed626cc9803914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 23 18:43:18 compute-0 sudo[130942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nilsiaplxyapofnxfvoglzzxqeuwalhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193796.2274425-238-233915920905763/AnsiballZ_file.py'
Jan 23 18:43:18 compute-0 podman[130873]: 2026-01-23 18:43:18.534317069 +0000 UTC m=+0.204370695 container start 94fbe4c1a13ab9b4665440714bb5a356111ea9ae71923ad355ed626cc9803914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:43:18 compute-0 sudo[130942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:18 compute-0 podman[130873]: 2026-01-23 18:43:18.539265619 +0000 UTC m=+0.209319245 container attach 94fbe4c1a13ab9b4665440714bb5a356111ea9ae71923ad355ed626cc9803914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 23 18:43:18 compute-0 python3.9[130946]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:18 compute-0 sudo[130942]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:18 compute-0 gifted_banach[130913]: {
Jan 23 18:43:18 compute-0 gifted_banach[130913]:     "0": [
Jan 23 18:43:18 compute-0 gifted_banach[130913]:         {
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "devices": [
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "/dev/loop3"
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             ],
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_name": "ceph_lv0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_size": "21470642176",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "name": "ceph_lv0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "tags": {
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.cluster_name": "ceph",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.crush_device_class": "",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.encrypted": "0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.objectstore": "bluestore",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.osd_id": "0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.type": "block",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.vdo": "0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.with_tpm": "0"
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             },
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "type": "block",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "vg_name": "ceph_vg0"
Jan 23 18:43:18 compute-0 gifted_banach[130913]:         }
Jan 23 18:43:18 compute-0 gifted_banach[130913]:     ],
Jan 23 18:43:18 compute-0 gifted_banach[130913]:     "1": [
Jan 23 18:43:18 compute-0 gifted_banach[130913]:         {
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "devices": [
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "/dev/loop4"
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             ],
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_name": "ceph_lv1",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_size": "21470642176",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "name": "ceph_lv1",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "tags": {
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.cluster_name": "ceph",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.crush_device_class": "",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.encrypted": "0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.objectstore": "bluestore",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.osd_id": "1",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.type": "block",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.vdo": "0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.with_tpm": "0"
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             },
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "type": "block",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "vg_name": "ceph_vg1"
Jan 23 18:43:18 compute-0 gifted_banach[130913]:         }
Jan 23 18:43:18 compute-0 gifted_banach[130913]:     ],
Jan 23 18:43:18 compute-0 gifted_banach[130913]:     "2": [
Jan 23 18:43:18 compute-0 gifted_banach[130913]:         {
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "devices": [
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "/dev/loop5"
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             ],
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_name": "ceph_lv2",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_size": "21470642176",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "name": "ceph_lv2",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "tags": {
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.cluster_name": "ceph",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.crush_device_class": "",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.encrypted": "0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.objectstore": "bluestore",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.osd_id": "2",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.type": "block",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.vdo": "0",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:                 "ceph.with_tpm": "0"
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             },
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "type": "block",
Jan 23 18:43:18 compute-0 gifted_banach[130913]:             "vg_name": "ceph_vg2"
Jan 23 18:43:18 compute-0 gifted_banach[130913]:         }
Jan 23 18:43:18 compute-0 gifted_banach[130913]:     ]
Jan 23 18:43:18 compute-0 gifted_banach[130913]: }
Jan 23 18:43:18 compute-0 systemd[1]: libpod-94fbe4c1a13ab9b4665440714bb5a356111ea9ae71923ad355ed626cc9803914.scope: Deactivated successfully.
Jan 23 18:43:18 compute-0 podman[130873]: 2026-01-23 18:43:18.850346502 +0000 UTC m=+0.520400108 container died 94fbe4c1a13ab9b4665440714bb5a356111ea9ae71923ad355ed626cc9803914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0b0da3af06cfa4de89220ca838f4b3f3cbeaf7bf369577df6358b26381f35c5-merged.mount: Deactivated successfully.
Jan 23 18:43:18 compute-0 podman[130873]: 2026-01-23 18:43:18.904316029 +0000 UTC m=+0.574369655 container remove 94fbe4c1a13ab9b4665440714bb5a356111ea9ae71923ad355ed626cc9803914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:43:18 compute-0 systemd[1]: libpod-conmon-94fbe4c1a13ab9b4665440714bb5a356111ea9ae71923ad355ed626cc9803914.scope: Deactivated successfully.
Jan 23 18:43:18 compute-0 sudo[130746]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:19 compute-0 sudo[131005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:43:19 compute-0 sudo[131005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:43:19 compute-0 sudo[131005]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:19 compute-0 sudo[131064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:43:19 compute-0 sudo[131064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:43:19 compute-0 ceph-mon[75097]: pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:19 compute-0 sudo[131162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyjjwgohnwkyokjufttdpeetcyhterqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193798.997727-246-85803098051135/AnsiballZ_stat.py'
Jan 23 18:43:19 compute-0 sudo[131162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:19 compute-0 podman[131177]: 2026-01-23 18:43:19.414535658 +0000 UTC m=+0.054523867 container create 5c923ac86d4c2889d780e514b1aa35a1f1e6a791ec1113ad6ebde2f0d1c2f0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:43:19 compute-0 systemd[1]: Started libpod-conmon-5c923ac86d4c2889d780e514b1aa35a1f1e6a791ec1113ad6ebde2f0d1c2f0ae.scope.
Jan 23 18:43:19 compute-0 python3.9[131164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:19 compute-0 podman[131177]: 2026-01-23 18:43:19.384460394 +0000 UTC m=+0.024448703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:43:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:43:19 compute-0 sudo[131162]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:19 compute-0 podman[131177]: 2026-01-23 18:43:19.512441626 +0000 UTC m=+0.152429915 container init 5c923ac86d4c2889d780e514b1aa35a1f1e6a791ec1113ad6ebde2f0d1c2f0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:43:19 compute-0 podman[131177]: 2026-01-23 18:43:19.519624498 +0000 UTC m=+0.159612707 container start 5c923ac86d4c2889d780e514b1aa35a1f1e6a791ec1113ad6ebde2f0d1c2f0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 18:43:19 compute-0 priceless_lamport[131193]: 167 167
Jan 23 18:43:19 compute-0 systemd[1]: libpod-5c923ac86d4c2889d780e514b1aa35a1f1e6a791ec1113ad6ebde2f0d1c2f0ae.scope: Deactivated successfully.
Jan 23 18:43:19 compute-0 podman[131177]: 2026-01-23 18:43:19.523750115 +0000 UTC m=+0.163738414 container attach 5c923ac86d4c2889d780e514b1aa35a1f1e6a791ec1113ad6ebde2f0d1c2f0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:43:19 compute-0 podman[131177]: 2026-01-23 18:43:19.52512309 +0000 UTC m=+0.165111359 container died 5c923ac86d4c2889d780e514b1aa35a1f1e6a791ec1113ad6ebde2f0d1c2f0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 18:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2734eea5a4448acb92c0056872dd7d17dc28e4cc4bd13e877fa44b266433963c-merged.mount: Deactivated successfully.
Jan 23 18:43:19 compute-0 podman[131177]: 2026-01-23 18:43:19.574719716 +0000 UTC m=+0.214707935 container remove 5c923ac86d4c2889d780e514b1aa35a1f1e6a791ec1113ad6ebde2f0d1c2f0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:43:19 compute-0 systemd[1]: libpod-conmon-5c923ac86d4c2889d780e514b1aa35a1f1e6a791ec1113ad6ebde2f0d1c2f0ae.scope: Deactivated successfully.
Jan 23 18:43:19 compute-0 podman[131263]: 2026-01-23 18:43:19.727621159 +0000 UTC m=+0.044339210 container create 879de7f682f359fce2fdff8bd3590c90f67023b5319506f076382ae129441c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:43:19 compute-0 systemd[1]: Started libpod-conmon-879de7f682f359fce2fdff8bd3590c90f67023b5319506f076382ae129441c39.scope.
Jan 23 18:43:19 compute-0 podman[131263]: 2026-01-23 18:43:19.707858964 +0000 UTC m=+0.024577035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:43:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7938a47c72270a94503b6f2f64e441ae9e1c65e1ec96ea49e8e6261b067734cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7938a47c72270a94503b6f2f64e441ae9e1c65e1ec96ea49e8e6261b067734cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7938a47c72270a94503b6f2f64e441ae9e1c65e1ec96ea49e8e6261b067734cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7938a47c72270a94503b6f2f64e441ae9e1c65e1ec96ea49e8e6261b067734cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:43:19 compute-0 podman[131263]: 2026-01-23 18:43:19.831836253 +0000 UTC m=+0.148554344 container init 879de7f682f359fce2fdff8bd3590c90f67023b5319506f076382ae129441c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_galois, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 18:43:19 compute-0 podman[131263]: 2026-01-23 18:43:19.843353416 +0000 UTC m=+0.160071477 container start 879de7f682f359fce2fdff8bd3590c90f67023b5319506f076382ae129441c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:43:19 compute-0 podman[131263]: 2026-01-23 18:43:19.849144063 +0000 UTC m=+0.165862134 container attach 879de7f682f359fce2fdff8bd3590c90f67023b5319506f076382ae129441c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:43:19 compute-0 sudo[131357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sliegexxhvtlbpcyadrusrewyjavdayb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193798.997727-246-85803098051135/AnsiballZ_copy.py'
Jan 23 18:43:19 compute-0 sudo[131357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:20 compute-0 python3.9[131359]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193798.997727-246-85803098051135/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad138ff1d14b381863f6932c876cfdf53601960 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:20 compute-0 sudo[131357]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:20 compute-0 lvm[131566]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:43:20 compute-0 lvm[131566]: VG ceph_vg0 finished
Jan 23 18:43:20 compute-0 lvm[131567]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:43:20 compute-0 lvm[131567]: VG ceph_vg1 finished
Jan 23 18:43:20 compute-0 lvm[131585]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:43:20 compute-0 lvm[131585]: VG ceph_vg2 finished
Jan 23 18:43:20 compute-0 sudo[131586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khyuzuyntglwyisbbfyvryamtkiowudl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193800.2595391-262-224844314430678/AnsiballZ_file.py'
Jan 23 18:43:20 compute-0 sudo[131586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:20 compute-0 lvm[131588]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:43:20 compute-0 lvm[131588]: VG ceph_vg0 finished
Jan 23 18:43:20 compute-0 ecstatic_galois[131309]: {}
Jan 23 18:43:20 compute-0 systemd[1]: libpod-879de7f682f359fce2fdff8bd3590c90f67023b5319506f076382ae129441c39.scope: Deactivated successfully.
Jan 23 18:43:20 compute-0 systemd[1]: libpod-879de7f682f359fce2fdff8bd3590c90f67023b5319506f076382ae129441c39.scope: Consumed 1.310s CPU time.
Jan 23 18:43:20 compute-0 podman[131263]: 2026-01-23 18:43:20.632902852 +0000 UTC m=+0.949620923 container died 879de7f682f359fce2fdff8bd3590c90f67023b5319506f076382ae129441c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_galois, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 18:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7938a47c72270a94503b6f2f64e441ae9e1c65e1ec96ea49e8e6261b067734cc-merged.mount: Deactivated successfully.
Jan 23 18:43:20 compute-0 podman[131263]: 2026-01-23 18:43:20.684504424 +0000 UTC m=+1.001222565 container remove 879de7f682f359fce2fdff8bd3590c90f67023b5319506f076382ae129441c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 18:43:20 compute-0 systemd[1]: libpod-conmon-879de7f682f359fce2fdff8bd3590c90f67023b5319506f076382ae129441c39.scope: Deactivated successfully.
Jan 23 18:43:20 compute-0 sudo[131064]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:20 compute-0 python3.9[131589]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:43:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:43:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:43:20 compute-0 sudo[131586]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:43:20 compute-0 sudo[131604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:43:20 compute-0 sudo[131604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:43:20 compute-0 sudo[131604]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:21 compute-0 ceph-mon[75097]: pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:43:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:43:21 compute-0 sudo[131778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abnxlvtmmyzdublchxmaslxhcnwtafwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193800.921353-270-193991303908911/AnsiballZ_stat.py'
Jan 23 18:43:21 compute-0 sudo[131778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:21 compute-0 python3.9[131780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:21 compute-0 sudo[131778]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:21 compute-0 sudo[131901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seyehmmcatyyabrfqrygviskccfkvkgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193800.921353-270-193991303908911/AnsiballZ_copy.py'
Jan 23 18:43:21 compute-0 sudo[131901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:22 compute-0 python3.9[131903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193800.921353-270-193991303908911/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad138ff1d14b381863f6932c876cfdf53601960 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:22 compute-0 sudo[131901]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:22 compute-0 sudo[132053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjvwwlkiocoabhzjmzzlunafyanhaycw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193802.2990508-286-139394189955745/AnsiballZ_file.py'
Jan 23 18:43:22 compute-0 sudo[132053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:22 compute-0 python3.9[132055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:22 compute-0 sudo[132053]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:23 compute-0 ceph-mon[75097]: pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:23 compute-0 sudo[132205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awtcftcmexwqfesqzhfltruyvpanauen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193803.0275064-294-81987033911144/AnsiballZ_stat.py'
Jan 23 18:43:23 compute-0 sudo[132205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:23 compute-0 python3.9[132207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:23 compute-0 sudo[132205]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:23 compute-0 sudo[132328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsjqsgjjvlhxntzcnfnfsbldrddxlgdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193803.0275064-294-81987033911144/AnsiballZ_copy.py'
Jan 23 18:43:23 compute-0 sudo[132328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:24 compute-0 python3.9[132330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193803.0275064-294-81987033911144/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad138ff1d14b381863f6932c876cfdf53601960 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:24 compute-0 sudo[132328]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:24 compute-0 sudo[132480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndjucskdoselpuhrtfkmmbonxlhnvwva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193804.3095717-310-3051843369599/AnsiballZ_file.py'
Jan 23 18:43:24 compute-0 sudo[132480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:24 compute-0 python3.9[132482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:24 compute-0 sudo[132480]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:25 compute-0 ceph-mon[75097]: pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:25 compute-0 sudo[132632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxlazvlhreuilclzwmgqfcagmecxpjcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193805.0218277-318-218525340678626/AnsiballZ_stat.py'
Jan 23 18:43:25 compute-0 sudo[132632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:25 compute-0 python3.9[132634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:25 compute-0 sudo[132632]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:25 compute-0 sudo[132755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlmddwzlktowzwlrgpheyzjiqkrzagma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193805.0218277-318-218525340678626/AnsiballZ_copy.py'
Jan 23 18:43:25 compute-0 sudo[132755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:26 compute-0 python3.9[132757]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193805.0218277-318-218525340678626/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad138ff1d14b381863f6932c876cfdf53601960 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:26 compute-0 sudo[132755]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:26 compute-0 sudo[132907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baunulimlbrnfxgcgnqxlvqftkjgwzyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193806.4045434-334-134362807962393/AnsiballZ_file.py'
Jan 23 18:43:26 compute-0 sudo[132907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:26 compute-0 python3.9[132909]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:26 compute-0 sudo[132907]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:27 compute-0 ceph-mon[75097]: pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:27 compute-0 sudo[133059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzrqbsjgggjlvwaaxmqjgdclngoqmzbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193807.0942945-342-279810845839824/AnsiballZ_stat.py'
Jan 23 18:43:27 compute-0 sudo[133059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:27 compute-0 python3.9[133061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:27 compute-0 sudo[133059]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:28 compute-0 sudo[133182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsfqfiiahivwhlnzjrzudzyxqcbwvbuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193807.0942945-342-279810845839824/AnsiballZ_copy.py'
Jan 23 18:43:28 compute-0 sudo[133182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:28 compute-0 python3.9[133184]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193807.0942945-342-279810845839824/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad138ff1d14b381863f6932c876cfdf53601960 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:28 compute-0 sudo[133182]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:28 compute-0 sudo[133334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvntzawdikgpfqbrcyycmnizpodmbkne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193808.589356-358-116520109937683/AnsiballZ_file.py'
Jan 23 18:43:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:43:28
Jan 23 18:43:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:43:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:43:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms', 'images', 'default.rgw.meta']
Jan 23 18:43:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:43:28 compute-0 sudo[133334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:29 compute-0 python3.9[133336]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:29 compute-0 sudo[133334]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:29 compute-0 ceph-mon[75097]: pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:29 compute-0 sudo[133486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvtvhvyhgjxnxcvkppilsrhlvdbxbphg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193809.2782142-366-78505105249450/AnsiballZ_stat.py'
Jan 23 18:43:29 compute-0 sudo[133486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:29 compute-0 python3.9[133488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:29 compute-0 sudo[133486]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:43:30 compute-0 sudo[133609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pchycqloirvqkzjihhwomnuuibqykzkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193809.2782142-366-78505105249450/AnsiballZ_copy.py'
Jan 23 18:43:30 compute-0 sudo[133609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:30 compute-0 python3.9[133611]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193809.2782142-366-78505105249450/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad138ff1d14b381863f6932c876cfdf53601960 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:30 compute-0 sudo[133609]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:30 compute-0 sshd-session[126894]: Connection closed by 192.168.122.30 port 43996
Jan 23 18:43:30 compute-0 sshd-session[126891]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:43:30 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 23 18:43:30 compute-0 systemd[1]: session-43.scope: Consumed 25.119s CPU time.
Jan 23 18:43:30 compute-0 systemd-logind[792]: Session 43 logged out. Waiting for processes to exit.
Jan 23 18:43:30 compute-0 systemd-logind[792]: Removed session 43.
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:43:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:43:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.126636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193811126710, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1421, "num_deletes": 252, "total_data_size": 2005146, "memory_usage": 2029496, "flush_reason": "Manual Compaction"}
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193811143513, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1188450, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7427, "largest_seqno": 8847, "table_properties": {"data_size": 1183580, "index_size": 2074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13888, "raw_average_key_size": 20, "raw_value_size": 1172263, "raw_average_value_size": 1731, "num_data_blocks": 97, "num_entries": 677, "num_filter_entries": 677, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193681, "oldest_key_time": 1769193681, "file_creation_time": 1769193811, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 16945 microseconds, and 7488 cpu microseconds.
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.143589) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1188450 bytes OK
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.143610) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.145396) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.145413) EVENT_LOG_v1 {"time_micros": 1769193811145408, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.145432) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1998691, prev total WAL file size 1998691, number of live WAL files 2.
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.146411) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1160KB)], [20(7825KB)]
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193811146529, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9201502, "oldest_snapshot_seqno": -1}
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3351 keys, 7039941 bytes, temperature: kUnknown
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193811224806, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7039941, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7014299, "index_size": 16189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 80547, "raw_average_key_size": 24, "raw_value_size": 6950461, "raw_average_value_size": 2074, "num_data_blocks": 716, "num_entries": 3351, "num_filter_entries": 3351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769193811, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:43:31 compute-0 ceph-mon[75097]: pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.225006) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7039941 bytes
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.227269) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.5 rd, 89.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 7.6 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(13.7) write-amplify(5.9) OK, records in: 3808, records dropped: 457 output_compression: NoCompression
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.227341) EVENT_LOG_v1 {"time_micros": 1769193811227311, "job": 6, "event": "compaction_finished", "compaction_time_micros": 78324, "compaction_time_cpu_micros": 35287, "output_level": 6, "num_output_files": 1, "total_output_size": 7039941, "num_input_records": 3808, "num_output_records": 3351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193811227939, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193811229618, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.146245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.229692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.229700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.229704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.229707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:43:31 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:43:31.229710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:43:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:33 compute-0 ceph-mon[75097]: pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:35 compute-0 ceph-mon[75097]: pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:36 compute-0 sshd-session[133636]: Accepted publickey for zuul from 192.168.122.30 port 43552 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:43:36 compute-0 systemd-logind[792]: New session 44 of user zuul.
Jan 23 18:43:36 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 23 18:43:36 compute-0 sshd-session[133636]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:43:36 compute-0 sudo[133789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyvsyuxqfhbivacxxoacyjrktnhppyxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193816.279033-17-80422320078406/AnsiballZ_file.py'
Jan 23 18:43:36 compute-0 sudo[133789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:36 compute-0 python3.9[133791]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:37 compute-0 sudo[133789]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:37 compute-0 ceph-mon[75097]: pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:37 compute-0 sudo[133941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhrbedkefyismphwcsqofvtkyfihxldb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193817.2705278-29-262835753926519/AnsiballZ_stat.py'
Jan 23 18:43:37 compute-0 sudo[133941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:37 compute-0 python3.9[133943]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:37 compute-0 sudo[133941]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:38 compute-0 sudo[134064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtdtjojklqhnvjoouxcvdbgpggnoentx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193817.2705278-29-262835753926519/AnsiballZ_copy.py'
Jan 23 18:43:38 compute-0 sudo[134064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:38 compute-0 ceph-mon[75097]: pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:38 compute-0 python3.9[134066]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193817.2705278-29-262835753926519/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=2d2b8b0157954eb4dc62e722466d3a90ab6a11ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:38 compute-0 sudo[134064]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:39 compute-0 sudo[134216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duijdgrognsfzifmempmvlvldogdjcir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193818.8996522-29-43223539555637/AnsiballZ_stat.py'
Jan 23 18:43:39 compute-0 sudo[134216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:39 compute-0 python3.9[134218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:39 compute-0 sudo[134216]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:39 compute-0 sudo[134339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqeymoxfadpjtnufzhprlroillrwhbxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193818.8996522-29-43223539555637/AnsiballZ_copy.py'
Jan 23 18:43:39 compute-0 sudo[134339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:39 compute-0 python3.9[134341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193818.8996522-29-43223539555637/.source.conf _original_basename=ceph.conf follow=False checksum=d2eea07c0de0ca9f8744f3228888151a625cf5d7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:39 compute-0 sudo[134339]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:43:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:43:40 compute-0 sshd-session[133639]: Connection closed by 192.168.122.30 port 43552
Jan 23 18:43:40 compute-0 sshd-session[133636]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:43:40 compute-0 systemd-logind[792]: Session 44 logged out. Waiting for processes to exit.
Jan 23 18:43:40 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 23 18:43:40 compute-0 systemd[1]: session-44.scope: Consumed 2.878s CPU time.
Jan 23 18:43:40 compute-0 systemd-logind[792]: Removed session 44.
Jan 23 18:43:41 compute-0 ceph-mon[75097]: pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:43 compute-0 ceph-mon[75097]: pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:45 compute-0 sshd-session[134366]: Accepted publickey for zuul from 192.168.122.30 port 45556 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:43:45 compute-0 systemd-logind[792]: New session 45 of user zuul.
Jan 23 18:43:45 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 23 18:43:45 compute-0 ceph-mon[75097]: pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:45 compute-0 sshd-session[134366]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:43:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:46 compute-0 python3.9[134519]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:43:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:47 compute-0 sudo[134673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aafwpdeyhclnutaqmodnqcqmrnxuxajo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193826.5604753-29-87748681400192/AnsiballZ_file.py'
Jan 23 18:43:47 compute-0 sudo[134673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:47 compute-0 ceph-mon[75097]: pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:47 compute-0 python3.9[134675]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:47 compute-0 sudo[134673]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:47 compute-0 sudo[134825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnyeqgyzhhhhjyundkjdxkosidvjwiot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193827.3877852-29-86656055725426/AnsiballZ_file.py'
Jan 23 18:43:47 compute-0 sudo[134825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:47 compute-0 python3.9[134827]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:43:47 compute-0 sudo[134825]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:48 compute-0 python3.9[134977]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:43:49 compute-0 ceph-mon[75097]: pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:49 compute-0 sudo[135127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxvabmobbfktnmgmzhcwqcyykzmkihng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193828.8546007-52-57992023455855/AnsiballZ_seboolean.py'
Jan 23 18:43:49 compute-0 sudo[135127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:49 compute-0 python3.9[135129]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 23 18:43:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:50 compute-0 sudo[135127]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:51 compute-0 ceph-mon[75097]: pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:51 compute-0 sudo[135283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiyrpbceglruoaoenzerlokyccbggykg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193831.1425707-62-206044321701191/AnsiballZ_setup.py'
Jan 23 18:43:51 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 23 18:43:51 compute-0 sudo[135283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:51 compute-0 python3.9[135285]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:43:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:52 compute-0 sudo[135283]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:52 compute-0 ceph-mon[75097]: pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:52 compute-0 sudo[135367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydilibyehgslfutbhunepqskcmfopela ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193831.1425707-62-206044321701191/AnsiballZ_dnf.py'
Jan 23 18:43:52 compute-0 sudo[135367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:53 compute-0 python3.9[135369]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:43:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:54 compute-0 sudo[135367]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:55 compute-0 ceph-mon[75097]: pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:55 compute-0 sudo[135520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vamlpbykbhnwzkspckqvatbxvjfklixp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193834.6000636-74-43158987388935/AnsiballZ_systemd.py'
Jan 23 18:43:55 compute-0 sudo[135520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:55 compute-0 python3.9[135522]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 18:43:55 compute-0 sudo[135520]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:43:56 compute-0 sudo[135675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pobtvkzbjzeawzndhziyrsgjuoqvofje ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769193835.7981486-82-111271354579363/AnsiballZ_edpm_nftables_snippet.py'
Jan 23 18:43:56 compute-0 sudo[135675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:56 compute-0 python3[135677]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 23 18:43:56 compute-0 sudo[135675]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:56 compute-0 sudo[135827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goqiexvijcudyvumkekuysncaubzhbhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193836.704929-91-163838194420285/AnsiballZ_file.py'
Jan 23 18:43:56 compute-0 sudo[135827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:57 compute-0 ceph-mon[75097]: pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:57 compute-0 python3.9[135829]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:57 compute-0 sudo[135827]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:57 compute-0 sudo[135979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sagohtzersgdpzfisjfkbejpaakhipzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193837.3688834-99-173518919452549/AnsiballZ_stat.py'
Jan 23 18:43:57 compute-0 sudo[135979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:58 compute-0 python3.9[135981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:58 compute-0 sudo[135979]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:58 compute-0 sudo[136057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqnvpnilmkcuefkneeaczgpohiapfymn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193837.3688834-99-173518919452549/AnsiballZ_file.py'
Jan 23 18:43:58 compute-0 sudo[136057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:58 compute-0 python3.9[136059]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:58 compute-0 sudo[136057]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:59 compute-0 sudo[136209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qntcmuoissjhsmqiqrgcgatdblgwismr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193838.8268344-111-18152840385299/AnsiballZ_stat.py'
Jan 23 18:43:59 compute-0 sudo[136209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:59 compute-0 ceph-mon[75097]: pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:43:59 compute-0 python3.9[136211]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:43:59 compute-0 sudo[136209]: pam_unix(sudo:session): session closed for user root
Jan 23 18:43:59 compute-0 sudo[136288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iogdqesvuaqlrnpmuzmonqccxorjfmcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193838.8268344-111-18152840385299/AnsiballZ_file.py'
Jan 23 18:43:59 compute-0 sudo[136288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:43:59 compute-0 python3.9[136290]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ik013a_o recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:43:59 compute-0 sudo[136288]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:44:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:44:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:44:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:44:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:44:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:44:00 compute-0 sudo[136440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyaqenqvpyncnkdpwchsfoumfdnyhtro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193840.0884948-123-272834489189123/AnsiballZ_stat.py'
Jan 23 18:44:00 compute-0 sudo[136440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:00 compute-0 python3.9[136442]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:00 compute-0 sudo[136440]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:00 compute-0 sudo[136518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyilwpuyjvcrfniyhqnafpikyuhilufa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193840.0884948-123-272834489189123/AnsiballZ_file.py'
Jan 23 18:44:00 compute-0 sudo[136518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:01 compute-0 python3.9[136520]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:01 compute-0 sudo[136518]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:44:01 compute-0 ceph-mon[75097]: pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:01 compute-0 sudo[136670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qavkwjgylnjtryqasomsnguwwxzzetbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193841.30126-136-33531219298145/AnsiballZ_command.py'
Jan 23 18:44:01 compute-0 sudo[136670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:01 compute-0 python3.9[136672]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:44:02 compute-0 sudo[136670]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:02 compute-0 sudo[136823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yygltxpdgnqeggfhnaoxwpfcnoqfitfj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769193842.2197926-144-53902502340891/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 18:44:02 compute-0 sudo[136823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:02 compute-0 python3[136825]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 18:44:02 compute-0 sudo[136823]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:03 compute-0 ceph-mon[75097]: pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:03 compute-0 sudo[136975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjbbkmfnzfjklcmnzahyjznbjxkirscn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193842.9827113-152-28952118607988/AnsiballZ_stat.py'
Jan 23 18:44:03 compute-0 sudo[136975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:03 compute-0 python3.9[136977]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:03 compute-0 sudo[136975]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:04 compute-0 sudo[137100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teocvnrfhkxaqkqtsngggsfflyaobcha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193842.9827113-152-28952118607988/AnsiballZ_copy.py'
Jan 23 18:44:04 compute-0 sudo[137100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:04 compute-0 python3.9[137102]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193842.9827113-152-28952118607988/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:04 compute-0 sudo[137100]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:04 compute-0 sudo[137252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdokfyaqjnhuvmijiihoettpzkpkufup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193844.4277527-167-57873062202231/AnsiballZ_stat.py'
Jan 23 18:44:04 compute-0 sudo[137252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:04 compute-0 python3.9[137254]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:04 compute-0 sudo[137252]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:05 compute-0 ceph-mon[75097]: pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:05 compute-0 sudo[137377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahjkjwmxxbcvmptecnscjrdrimnsujnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193844.4277527-167-57873062202231/AnsiballZ_copy.py'
Jan 23 18:44:05 compute-0 sudo[137377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:05 compute-0 python3.9[137379]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193844.4277527-167-57873062202231/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:05 compute-0 sudo[137377]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:05 compute-0 sudo[137529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibptxexueszywcfdxnlorcawnzgddhld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193845.7089925-182-115916650443819/AnsiballZ_stat.py'
Jan 23 18:44:05 compute-0 sudo[137529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:44:06 compute-0 python3.9[137531]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:06 compute-0 sudo[137529]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:06 compute-0 sudo[137654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jedweimdhxtpktfoyzawwbtmxyoqsvvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193845.7089925-182-115916650443819/AnsiballZ_copy.py'
Jan 23 18:44:06 compute-0 sudo[137654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:06 compute-0 python3.9[137656]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193845.7089925-182-115916650443819/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:06 compute-0 sudo[137654]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:44:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1995 writes, 8956 keys, 1995 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 1995 writes, 1995 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1995 writes, 8956 keys, 1995 commit groups, 1.0 writes per commit group, ingest: 11.89 MB, 0.02 MB/s
                                           Interval WAL: 1995 writes, 1995 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     44.2      0.20              0.02         3    0.067       0      0       0.0       0.0
                                             L6      1/0    6.71 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     40.7     35.5      0.40              0.05         2    0.202    7228    746       0.0       0.0
                                            Sum      1/0    6.71 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     27.2     38.4      0.60              0.08         5    0.121    7228    746       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     27.4     38.5      0.60              0.08         4    0.150    7228    746       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     40.7     35.5      0.40              0.05         2    0.202    7228    746       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     44.9      0.20              0.02         2    0.098       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.009, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.6 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5639441df8d0#2 capacity: 308.00 MB usage: 657.62 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(39,570.34 KB,0.180836%) FilterBlock(6,28.36 KB,0.00899179%) IndexBlock(6,58.92 KB,0.0186821%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 18:44:07 compute-0 sudo[137806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wudujcanxcbpvxhvxrrsyqvpsflzlpep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193846.900579-197-159224315703780/AnsiballZ_stat.py'
Jan 23 18:44:07 compute-0 sudo[137806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:07 compute-0 ceph-mon[75097]: pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:07 compute-0 python3.9[137808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:07 compute-0 sudo[137806]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:07 compute-0 sudo[137931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnyqkvuwbvbdjiizpduwetmuetliqytq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193846.900579-197-159224315703780/AnsiballZ_copy.py'
Jan 23 18:44:07 compute-0 sudo[137931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:07 compute-0 python3.9[137933]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193846.900579-197-159224315703780/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:07 compute-0 sudo[137931]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:08 compute-0 ceph-mon[75097]: pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:08 compute-0 sudo[138083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exedcbnrdjsytlyxqplnjhnwpnbsocxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193848.137145-212-232074640625989/AnsiballZ_stat.py'
Jan 23 18:44:08 compute-0 sudo[138083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:08 compute-0 python3.9[138085]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:08 compute-0 sudo[138083]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:09 compute-0 sudo[138208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlhnnnsopwrppdwkwlorxyzjvsczdtev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193848.137145-212-232074640625989/AnsiballZ_copy.py'
Jan 23 18:44:09 compute-0 sudo[138208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:09 compute-0 python3.9[138210]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769193848.137145-212-232074640625989/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:09 compute-0 sudo[138208]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:09 compute-0 sudo[138360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbzyuguyfgqpydmopzvyxwjplevyklsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193849.5704634-227-136765418984906/AnsiballZ_file.py'
Jan 23 18:44:09 compute-0 sudo[138360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:10 compute-0 python3.9[138362]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:10 compute-0 sudo[138360]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:10 compute-0 sudo[138512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyaotsykbftgwyxnedixaovlwjjpviyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193850.3404653-235-255602567917319/AnsiballZ_command.py'
Jan 23 18:44:10 compute-0 sudo[138512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:10 compute-0 python3.9[138514]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:44:10 compute-0 sudo[138512]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:11 compute-0 ceph-mon[75097]: pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:44:11 compute-0 sudo[138667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmywmrxznxtnzmrobbparnejihxdktgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193851.01817-243-200773670817447/AnsiballZ_blockinfile.py'
Jan 23 18:44:11 compute-0 sudo[138667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:11 compute-0 python3.9[138669]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:11 compute-0 sudo[138667]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:12 compute-0 sudo[138819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epcvlsvvcegpjgcilqrkiheuzhpldlfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193851.9513693-252-207281443516621/AnsiballZ_command.py'
Jan 23 18:44:12 compute-0 sudo[138819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:12 compute-0 python3.9[138821]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:44:12 compute-0 sudo[138819]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:12 compute-0 sudo[138972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpscgpormjanbwjurepgrbiwmcuzvgxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193852.6622505-260-133623919551995/AnsiballZ_stat.py'
Jan 23 18:44:12 compute-0 sudo[138972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:13 compute-0 ceph-mon[75097]: pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:13 compute-0 python3.9[138974]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:44:13 compute-0 sudo[138972]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:13 compute-0 sudo[139126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmingnzvwjycezlukoquisklddtdlthz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193853.3908186-268-205570704169795/AnsiballZ_command.py'
Jan 23 18:44:13 compute-0 sudo[139126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:13 compute-0 python3.9[139128]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:44:13 compute-0 sudo[139126]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:14 compute-0 sudo[139281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrhbzdnyzxdlmswfyxgxecqzstkiiquc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193854.170086-276-127735933342074/AnsiballZ_file.py'
Jan 23 18:44:14 compute-0 sudo[139281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:14 compute-0 python3.9[139283]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:14 compute-0 sudo[139281]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:15 compute-0 ceph-mon[75097]: pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:15 compute-0 python3.9[139433]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:44:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:44:16 compute-0 sudo[139584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btpxfmkkuetcmevskrvunftpnlrobpmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193856.4946718-316-262270822800123/AnsiballZ_command.py'
Jan 23 18:44:16 compute-0 sudo[139584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:17 compute-0 python3.9[139586]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:44:17 compute-0 ovs-vsctl[139587]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 23 18:44:17 compute-0 sudo[139584]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:17 compute-0 ceph-mon[75097]: pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:17 compute-0 sudo[139737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtiaekhgmxmovaqdyrmzownluafmmiuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193857.3299334-325-24244002811953/AnsiballZ_command.py'
Jan 23 18:44:17 compute-0 sudo[139737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:17 compute-0 python3.9[139739]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:44:17 compute-0 sudo[139737]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:18 compute-0 sudo[139892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szwzzxyakcjiclsunvdorrxqcmsgikcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193858.036787-333-159413432393062/AnsiballZ_command.py'
Jan 23 18:44:18 compute-0 sudo[139892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:18 compute-0 python3.9[139894]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:44:18 compute-0 ovs-vsctl[139895]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 23 18:44:18 compute-0 sudo[139892]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:19 compute-0 ceph-mon[75097]: pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:19 compute-0 python3.9[140045]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:44:19 compute-0 sudo[140197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gccqdrbexvqrowdwjdkumrhekgxgguqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193859.5885558-350-142254868147785/AnsiballZ_file.py'
Jan 23 18:44:19 compute-0 sudo[140197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:20 compute-0 python3.9[140199]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:44:20 compute-0 sudo[140197]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:20 compute-0 sudo[140349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnvnpkfbjvlxmemoiejaxjbhezyxcibm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193860.3140767-358-233889358669902/AnsiballZ_stat.py'
Jan 23 18:44:20 compute-0 sudo[140349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:20 compute-0 python3.9[140351]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:20 compute-0 sudo[140349]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:20 compute-0 sudo[140356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:44:20 compute-0 sudo[140356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:44:20 compute-0 sudo[140356]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:20 compute-0 sudo[140402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:44:20 compute-0 sudo[140402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:44:21 compute-0 sudo[140477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhzrurhjwppgapfqudfncdczapbqkjpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193860.3140767-358-233889358669902/AnsiballZ_file.py'
Jan 23 18:44:21 compute-0 sudo[140477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:44:21 compute-0 ceph-mon[75097]: pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:21 compute-0 python3.9[140479]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:44:21 compute-0 sudo[140477]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:21 compute-0 sudo[140402]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:44:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:44:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:44:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:44:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:44:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:44:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:44:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:44:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:44:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:44:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:44:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:44:21 compute-0 sudo[140610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:44:21 compute-0 sudo[140610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:44:21 compute-0 sudo[140610]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:21 compute-0 sudo[140659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:44:21 compute-0 sudo[140659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:44:21 compute-0 sudo[140710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsfndgbtgernfqakvsinneawpxiipcfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193861.407108-358-245184886251540/AnsiballZ_stat.py'
Jan 23 18:44:21 compute-0 sudo[140710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:21 compute-0 python3.9[140712]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:21 compute-0 sudo[140710]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:22 compute-0 podman[140728]: 2026-01-23 18:44:22.001203914 +0000 UTC m=+0.076376162 container create afb24a3437bb54688c1109a3a23b65ab3aedf614dd573be8c5cb0805ad5d550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:44:22 compute-0 systemd[1]: Started libpod-conmon-afb24a3437bb54688c1109a3a23b65ab3aedf614dd573be8c5cb0805ad5d550f.scope.
Jan 23 18:44:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:22 compute-0 podman[140728]: 2026-01-23 18:44:21.968974167 +0000 UTC m=+0.044146465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:44:22 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:44:22 compute-0 podman[140728]: 2026-01-23 18:44:22.132106195 +0000 UTC m=+0.207278463 container init afb24a3437bb54688c1109a3a23b65ab3aedf614dd573be8c5cb0805ad5d550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 18:44:22 compute-0 podman[140728]: 2026-01-23 18:44:22.147932986 +0000 UTC m=+0.223105244 container start afb24a3437bb54688c1109a3a23b65ab3aedf614dd573be8c5cb0805ad5d550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 18:44:22 compute-0 podman[140728]: 2026-01-23 18:44:22.151861864 +0000 UTC m=+0.227034202 container attach afb24a3437bb54688c1109a3a23b65ab3aedf614dd573be8c5cb0805ad5d550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 23 18:44:22 compute-0 gallant_kapitsa[140769]: 167 167
Jan 23 18:44:22 compute-0 systemd[1]: libpod-afb24a3437bb54688c1109a3a23b65ab3aedf614dd573be8c5cb0805ad5d550f.scope: Deactivated successfully.
Jan 23 18:44:22 compute-0 podman[140728]: 2026-01-23 18:44:22.159444462 +0000 UTC m=+0.234616750 container died afb24a3437bb54688c1109a3a23b65ab3aedf614dd573be8c5cb0805ad5d550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:44:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:44:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:44:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:44:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:44:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:44:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:44:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a3fbe68b872e29d4c397d033506e524f3dba22935e27cc1b4c13e703fad40c1-merged.mount: Deactivated successfully.
Jan 23 18:44:22 compute-0 sudo[140824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uktqlukplqdjznvtivcvjleujsnahrse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193861.407108-358-245184886251540/AnsiballZ_file.py'
Jan 23 18:44:22 compute-0 sudo[140824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:22 compute-0 podman[140728]: 2026-01-23 18:44:22.211221214 +0000 UTC m=+0.286393462 container remove afb24a3437bb54688c1109a3a23b65ab3aedf614dd573be8c5cb0805ad5d550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 18:44:22 compute-0 systemd[1]: libpod-conmon-afb24a3437bb54688c1109a3a23b65ab3aedf614dd573be8c5cb0805ad5d550f.scope: Deactivated successfully.
Jan 23 18:44:22 compute-0 podman[140843]: 2026-01-23 18:44:22.388393439 +0000 UTC m=+0.045675831 container create 0466404b2934cb07bc464d7497910bff60ce289995f39309dc4829812932e4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_poincare, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:44:22 compute-0 python3.9[140835]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:44:22 compute-0 sudo[140824]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:22 compute-0 systemd[1]: Started libpod-conmon-0466404b2934cb07bc464d7497910bff60ce289995f39309dc4829812932e4d2.scope.
Jan 23 18:44:22 compute-0 podman[140843]: 2026-01-23 18:44:22.365345939 +0000 UTC m=+0.022628361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:44:22 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3e7d95b589eecf5a431c3482980a31992eac2ec85cc31c3ef75a5aff0cb9cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3e7d95b589eecf5a431c3482980a31992eac2ec85cc31c3ef75a5aff0cb9cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3e7d95b589eecf5a431c3482980a31992eac2ec85cc31c3ef75a5aff0cb9cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3e7d95b589eecf5a431c3482980a31992eac2ec85cc31c3ef75a5aff0cb9cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3e7d95b589eecf5a431c3482980a31992eac2ec85cc31c3ef75a5aff0cb9cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:22 compute-0 podman[140843]: 2026-01-23 18:44:22.487997655 +0000 UTC m=+0.145280117 container init 0466404b2934cb07bc464d7497910bff60ce289995f39309dc4829812932e4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:44:22 compute-0 podman[140843]: 2026-01-23 18:44:22.494733762 +0000 UTC m=+0.152016154 container start 0466404b2934cb07bc464d7497910bff60ce289995f39309dc4829812932e4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_poincare, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:44:22 compute-0 podman[140843]: 2026-01-23 18:44:22.497951062 +0000 UTC m=+0.155233544 container attach 0466404b2934cb07bc464d7497910bff60ce289995f39309dc4829812932e4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:44:22 compute-0 sudo[141024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knnwegprcdegelutidagzesjrrxjtmlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193862.599942-381-71967172596036/AnsiballZ_file.py'
Jan 23 18:44:22 compute-0 sudo[141024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:23 compute-0 quizzical_poincare[140860]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:44:23 compute-0 quizzical_poincare[140860]: --> All data devices are unavailable
Jan 23 18:44:23 compute-0 systemd[1]: libpod-0466404b2934cb07bc464d7497910bff60ce289995f39309dc4829812932e4d2.scope: Deactivated successfully.
Jan 23 18:44:23 compute-0 podman[140843]: 2026-01-23 18:44:23.0710464 +0000 UTC m=+0.728328822 container died 0466404b2934cb07bc464d7497910bff60ce289995f39309dc4829812932e4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_poincare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a3e7d95b589eecf5a431c3482980a31992eac2ec85cc31c3ef75a5aff0cb9cd-merged.mount: Deactivated successfully.
Jan 23 18:44:23 compute-0 podman[140843]: 2026-01-23 18:44:23.123249123 +0000 UTC m=+0.780531505 container remove 0466404b2934cb07bc464d7497910bff60ce289995f39309dc4829812932e4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_poincare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:44:23 compute-0 systemd[1]: libpod-conmon-0466404b2934cb07bc464d7497910bff60ce289995f39309dc4829812932e4d2.scope: Deactivated successfully.
Jan 23 18:44:23 compute-0 sudo[140659]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:23 compute-0 python3.9[141027]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:23 compute-0 ceph-mon[75097]: pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:23 compute-0 sudo[141024]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:23 compute-0 sudo[141043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:44:23 compute-0 sudo[141043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:44:23 compute-0 sudo[141043]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:23 compute-0 sudo[141091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:44:23 compute-0 sudo[141091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:44:23 compute-0 podman[141227]: 2026-01-23 18:44:23.627289701 +0000 UTC m=+0.056318155 container create fc702f9d4881afdd518b3a6274a0983b55b50795d18d830cd4170687fbfb2043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lehmann, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:44:23 compute-0 systemd[1]: Started libpod-conmon-fc702f9d4881afdd518b3a6274a0983b55b50795d18d830cd4170687fbfb2043.scope.
Jan 23 18:44:23 compute-0 sudo[141268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhkabvyczldbukyyfvnokxpzasypuxxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193863.3444984-389-130717507516497/AnsiballZ_stat.py'
Jan 23 18:44:23 compute-0 sudo[141268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:23 compute-0 podman[141227]: 2026-01-23 18:44:23.60700937 +0000 UTC m=+0.036037844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:44:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:44:23 compute-0 podman[141227]: 2026-01-23 18:44:23.736509585 +0000 UTC m=+0.165538109 container init fc702f9d4881afdd518b3a6274a0983b55b50795d18d830cd4170687fbfb2043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 18:44:23 compute-0 podman[141227]: 2026-01-23 18:44:23.751492706 +0000 UTC m=+0.180521170 container start fc702f9d4881afdd518b3a6274a0983b55b50795d18d830cd4170687fbfb2043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lehmann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 18:44:23 compute-0 podman[141227]: 2026-01-23 18:44:23.756299915 +0000 UTC m=+0.185328369 container attach fc702f9d4881afdd518b3a6274a0983b55b50795d18d830cd4170687fbfb2043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lehmann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 23 18:44:23 compute-0 quirky_lehmann[141272]: 167 167
Jan 23 18:44:23 compute-0 systemd[1]: libpod-fc702f9d4881afdd518b3a6274a0983b55b50795d18d830cd4170687fbfb2043.scope: Deactivated successfully.
Jan 23 18:44:23 compute-0 podman[141227]: 2026-01-23 18:44:23.760016337 +0000 UTC m=+0.189044821 container died fc702f9d4881afdd518b3a6274a0983b55b50795d18d830cd4170687fbfb2043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7df145322aabe29e633e57d27dbe602728bcf4a51a9b6566358e70b759c56788-merged.mount: Deactivated successfully.
Jan 23 18:44:23 compute-0 podman[141227]: 2026-01-23 18:44:23.807210956 +0000 UTC m=+0.236239380 container remove fc702f9d4881afdd518b3a6274a0983b55b50795d18d830cd4170687fbfb2043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:44:23 compute-0 systemd[1]: libpod-conmon-fc702f9d4881afdd518b3a6274a0983b55b50795d18d830cd4170687fbfb2043.scope: Deactivated successfully.
Jan 23 18:44:23 compute-0 python3.9[141274]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:23 compute-0 sudo[141268]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:23 compute-0 podman[141300]: 2026-01-23 18:44:23.975081501 +0000 UTC m=+0.046110733 container create 180783be4472099382f9a981a36658900b8141eb5b860e5cbe65f103b81716db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lovelace, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:44:24 compute-0 systemd[1]: Started libpod-conmon-180783be4472099382f9a981a36658900b8141eb5b860e5cbe65f103b81716db.scope.
Jan 23 18:44:24 compute-0 podman[141300]: 2026-01-23 18:44:23.95646535 +0000 UTC m=+0.027494622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:44:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:24 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98beb367f753ef6d22327c04428fbe900f1d77fee1575ee43a382a343332627/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98beb367f753ef6d22327c04428fbe900f1d77fee1575ee43a382a343332627/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98beb367f753ef6d22327c04428fbe900f1d77fee1575ee43a382a343332627/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98beb367f753ef6d22327c04428fbe900f1d77fee1575ee43a382a343332627/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:24 compute-0 podman[141300]: 2026-01-23 18:44:24.088021658 +0000 UTC m=+0.159050990 container init 180783be4472099382f9a981a36658900b8141eb5b860e5cbe65f103b81716db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 18:44:24 compute-0 podman[141300]: 2026-01-23 18:44:24.108431973 +0000 UTC m=+0.179461245 container start 180783be4472099382f9a981a36658900b8141eb5b860e5cbe65f103b81716db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:44:24 compute-0 podman[141300]: 2026-01-23 18:44:24.113687143 +0000 UTC m=+0.184716425 container attach 180783be4472099382f9a981a36658900b8141eb5b860e5cbe65f103b81716db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 23 18:44:24 compute-0 sudo[141394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffvrmhkxhwkeqbiljskwmrzllmjkiaha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193863.3444984-389-130717507516497/AnsiballZ_file.py'
Jan 23 18:44:24 compute-0 sudo[141394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:24 compute-0 python3.9[141396]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:24 compute-0 zen_lovelace[141339]: {
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:     "0": [
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:         {
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "devices": [
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "/dev/loop3"
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             ],
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_name": "ceph_lv0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_size": "21470642176",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "name": "ceph_lv0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "tags": {
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.cluster_name": "ceph",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.crush_device_class": "",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.encrypted": "0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.objectstore": "bluestore",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.osd_id": "0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.type": "block",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.vdo": "0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.with_tpm": "0"
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             },
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "type": "block",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "vg_name": "ceph_vg0"
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:         }
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:     ],
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:     "1": [
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:         {
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "devices": [
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "/dev/loop4"
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             ],
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_name": "ceph_lv1",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_size": "21470642176",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "name": "ceph_lv1",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "tags": {
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.cluster_name": "ceph",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.crush_device_class": "",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.encrypted": "0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.objectstore": "bluestore",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.osd_id": "1",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.type": "block",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.vdo": "0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.with_tpm": "0"
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             },
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "type": "block",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "vg_name": "ceph_vg1"
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:         }
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:     ],
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:     "2": [
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:         {
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "devices": [
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "/dev/loop5"
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             ],
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_name": "ceph_lv2",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_size": "21470642176",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "name": "ceph_lv2",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "tags": {
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.cluster_name": "ceph",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.crush_device_class": "",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.encrypted": "0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.objectstore": "bluestore",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.osd_id": "2",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.type": "block",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.vdo": "0",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:                 "ceph.with_tpm": "0"
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             },
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "type": "block",
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:             "vg_name": "ceph_vg2"
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:         }
Jan 23 18:44:24 compute-0 zen_lovelace[141339]:     ]
Jan 23 18:44:24 compute-0 zen_lovelace[141339]: }
Jan 23 18:44:24 compute-0 sudo[141394]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:24 compute-0 systemd[1]: libpod-180783be4472099382f9a981a36658900b8141eb5b860e5cbe65f103b81716db.scope: Deactivated successfully.
Jan 23 18:44:24 compute-0 podman[141300]: 2026-01-23 18:44:24.464780225 +0000 UTC m=+0.535809507 container died 180783be4472099382f9a981a36658900b8141eb5b860e5cbe65f103b81716db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:44:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f98beb367f753ef6d22327c04428fbe900f1d77fee1575ee43a382a343332627-merged.mount: Deactivated successfully.
Jan 23 18:44:24 compute-0 podman[141300]: 2026-01-23 18:44:24.516993907 +0000 UTC m=+0.588023149 container remove 180783be4472099382f9a981a36658900b8141eb5b860e5cbe65f103b81716db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 18:44:24 compute-0 systemd[1]: libpod-conmon-180783be4472099382f9a981a36658900b8141eb5b860e5cbe65f103b81716db.scope: Deactivated successfully.
Jan 23 18:44:24 compute-0 sudo[141091]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:24 compute-0 sudo[141442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:44:24 compute-0 sudo[141442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:44:24 compute-0 sudo[141442]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:24 compute-0 sudo[141494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:44:24 compute-0 sudo[141494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:44:24 compute-0 sudo[141635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xboqeugugnbwovikpjtuoryrzbuacvmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193864.6263535-401-227789535002369/AnsiballZ_stat.py'
Jan 23 18:44:24 compute-0 sudo[141635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:24 compute-0 podman[141607]: 2026-01-23 18:44:24.960776414 +0000 UTC m=+0.037270423 container create de7dfa65970a419197213ef5e82a631af1516245dac9c75c5a778f4e1b1a91db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 18:44:24 compute-0 systemd[1]: Started libpod-conmon-de7dfa65970a419197213ef5e82a631af1516245dac9c75c5a778f4e1b1a91db.scope.
Jan 23 18:44:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:44:25 compute-0 podman[141607]: 2026-01-23 18:44:24.945700172 +0000 UTC m=+0.022194201 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:44:25 compute-0 podman[141607]: 2026-01-23 18:44:25.042213871 +0000 UTC m=+0.118707910 container init de7dfa65970a419197213ef5e82a631af1516245dac9c75c5a778f4e1b1a91db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 18:44:25 compute-0 podman[141607]: 2026-01-23 18:44:25.048727062 +0000 UTC m=+0.125221101 container start de7dfa65970a419197213ef5e82a631af1516245dac9c75c5a778f4e1b1a91db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 18:44:25 compute-0 podman[141607]: 2026-01-23 18:44:25.05188751 +0000 UTC m=+0.128381539 container attach de7dfa65970a419197213ef5e82a631af1516245dac9c75c5a778f4e1b1a91db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_almeida, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 23 18:44:25 compute-0 elated_almeida[141644]: 167 167
Jan 23 18:44:25 compute-0 systemd[1]: libpod-de7dfa65970a419197213ef5e82a631af1516245dac9c75c5a778f4e1b1a91db.scope: Deactivated successfully.
Jan 23 18:44:25 compute-0 podman[141607]: 2026-01-23 18:44:25.054094795 +0000 UTC m=+0.130588794 container died de7dfa65970a419197213ef5e82a631af1516245dac9c75c5a778f4e1b1a91db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f93420fd825d11dc180242983c0e0c8193c48394f8a4cca31854d55fef8becb-merged.mount: Deactivated successfully.
Jan 23 18:44:25 compute-0 podman[141607]: 2026-01-23 18:44:25.093626583 +0000 UTC m=+0.170120592 container remove de7dfa65970a419197213ef5e82a631af1516245dac9c75c5a778f4e1b1a91db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_almeida, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:44:25 compute-0 systemd[1]: libpod-conmon-de7dfa65970a419197213ef5e82a631af1516245dac9c75c5a778f4e1b1a91db.scope: Deactivated successfully.
Jan 23 18:44:25 compute-0 python3.9[141641]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:25 compute-0 sudo[141635]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:25 compute-0 ceph-mon[75097]: pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:25 compute-0 podman[141670]: 2026-01-23 18:44:25.252762243 +0000 UTC m=+0.044989084 container create d3d3bc3fe958d332c7d87efa214ad74b8f9553566ec397e886ee0b833f7b0036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 23 18:44:25 compute-0 systemd[1]: Started libpod-conmon-d3d3bc3fe958d332c7d87efa214ad74b8f9553566ec397e886ee0b833f7b0036.scope.
Jan 23 18:44:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57d2ef545ec7c7108f36a16b4b6a7732d4f93f5c8f4f82aa3b06b088e159002/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57d2ef545ec7c7108f36a16b4b6a7732d4f93f5c8f4f82aa3b06b088e159002/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57d2ef545ec7c7108f36a16b4b6a7732d4f93f5c8f4f82aa3b06b088e159002/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57d2ef545ec7c7108f36a16b4b6a7732d4f93f5c8f4f82aa3b06b088e159002/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:44:25 compute-0 podman[141670]: 2026-01-23 18:44:25.22998617 +0000 UTC m=+0.022213021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:44:25 compute-0 podman[141670]: 2026-01-23 18:44:25.335764958 +0000 UTC m=+0.127991809 container init d3d3bc3fe958d332c7d87efa214ad74b8f9553566ec397e886ee0b833f7b0036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 18:44:25 compute-0 podman[141670]: 2026-01-23 18:44:25.344657339 +0000 UTC m=+0.136884170 container start d3d3bc3fe958d332c7d87efa214ad74b8f9553566ec397e886ee0b833f7b0036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:44:25 compute-0 podman[141670]: 2026-01-23 18:44:25.348994686 +0000 UTC m=+0.141221517 container attach d3d3bc3fe958d332c7d87efa214ad74b8f9553566ec397e886ee0b833f7b0036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 18:44:25 compute-0 sudo[141764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yostpxwqdzcwmjbbynamtinopxriywfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193864.6263535-401-227789535002369/AnsiballZ_file.py'
Jan 23 18:44:25 compute-0 sudo[141764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:25 compute-0 python3.9[141766]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:25 compute-0 sudo[141764]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:25 compute-0 lvm[141940]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:44:25 compute-0 lvm[141937]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:44:25 compute-0 lvm[141940]: VG ceph_vg1 finished
Jan 23 18:44:25 compute-0 lvm[141937]: VG ceph_vg0 finished
Jan 23 18:44:26 compute-0 lvm[141946]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:44:26 compute-0 lvm[141946]: VG ceph_vg2 finished
Jan 23 18:44:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:26 compute-0 sudo[141995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mavkwzupdchdkiroovobnmtyrqssapdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193865.8107312-413-214179403326994/AnsiballZ_systemd.py'
Jan 23 18:44:26 compute-0 sudo[141995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:26 compute-0 kind_feynman[141709]: {}
Jan 23 18:44:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:44:26 compute-0 systemd[1]: libpod-d3d3bc3fe958d332c7d87efa214ad74b8f9553566ec397e886ee0b833f7b0036.scope: Deactivated successfully.
Jan 23 18:44:26 compute-0 podman[141670]: 2026-01-23 18:44:26.157228075 +0000 UTC m=+0.949454906 container died d3d3bc3fe958d332c7d87efa214ad74b8f9553566ec397e886ee0b833f7b0036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 18:44:26 compute-0 systemd[1]: libpod-d3d3bc3fe958d332c7d87efa214ad74b8f9553566ec397e886ee0b833f7b0036.scope: Consumed 1.264s CPU time.
Jan 23 18:44:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f57d2ef545ec7c7108f36a16b4b6a7732d4f93f5c8f4f82aa3b06b088e159002-merged.mount: Deactivated successfully.
Jan 23 18:44:26 compute-0 podman[141670]: 2026-01-23 18:44:26.207975671 +0000 UTC m=+1.000202502 container remove d3d3bc3fe958d332c7d87efa214ad74b8f9553566ec397e886ee0b833f7b0036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 18:44:26 compute-0 systemd[1]: libpod-conmon-d3d3bc3fe958d332c7d87efa214ad74b8f9553566ec397e886ee0b833f7b0036.scope: Deactivated successfully.
Jan 23 18:44:26 compute-0 sudo[141494]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:44:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:44:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:44:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:44:26 compute-0 sudo[142010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:44:26 compute-0 sudo[142010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:44:26 compute-0 sudo[142010]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:26 compute-0 python3.9[141997]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:44:26 compute-0 systemd[1]: Reloading.
Jan 23 18:44:26 compute-0 systemd-sysv-generator[142061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:44:26 compute-0 systemd-rc-local-generator[142058]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:44:26 compute-0 sudo[141995]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:27 compute-0 sudo[142222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxmfmjwqdgksiipyljoeyymlofxktecl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193866.9045846-421-65609837908688/AnsiballZ_stat.py'
Jan 23 18:44:27 compute-0 sudo[142222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:27 compute-0 ceph-mon[75097]: pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:44:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:44:27 compute-0 python3.9[142224]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:27 compute-0 sudo[142222]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:27 compute-0 sudo[142300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpgrildnvwmzimjpggaqmxdnfixvejgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193866.9045846-421-65609837908688/AnsiballZ_file.py'
Jan 23 18:44:27 compute-0 sudo[142300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:27 compute-0 python3.9[142302]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:27 compute-0 sudo[142300]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:28 compute-0 sudo[142452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmfxuwcmjanmwruoymcdgtmjlnsfbnzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193867.9744-433-265255612487655/AnsiballZ_stat.py'
Jan 23 18:44:28 compute-0 sudo[142452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:28 compute-0 python3.9[142454]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:28 compute-0 sudo[142452]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:28 compute-0 sudo[142530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbzxmgaizxrskewcldgbyapzgldyqypt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193867.9744-433-265255612487655/AnsiballZ_file.py'
Jan 23 18:44:28 compute-0 sudo[142530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:28 compute-0 python3.9[142532]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:28 compute-0 sudo[142530]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:44:28
Jan 23 18:44:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:44:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:44:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'backups', '.mgr']
Jan 23 18:44:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:44:29 compute-0 ceph-mon[75097]: pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:29 compute-0 sudo[142682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxjvnxkjcfoxfwgyabndgvcgfpkfyoxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193869.0423365-445-46995400639216/AnsiballZ_systemd.py'
Jan 23 18:44:29 compute-0 sudo[142682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:29 compute-0 python3.9[142684]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:44:29 compute-0 systemd[1]: Reloading.
Jan 23 18:44:29 compute-0 systemd-sysv-generator[142716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:44:29 compute-0 systemd-rc-local-generator[142711]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:44:29 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 18:44:29 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 18:44:29 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 18:44:29 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 18:44:29 compute-0 sudo[142682]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:30 compute-0 sudo[142875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-airgexzptdlygfquudyqsyzaehqbttnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193870.23159-455-159021147095822/AnsiballZ_file.py'
Jan 23 18:44:30 compute-0 sudo[142875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:30 compute-0 python3.9[142877]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:44:30 compute-0 sudo[142875]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:44:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:44:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:44:31 compute-0 sudo[143027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmfqhzutthijahfnusmyvloxtflnanyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193870.879867-463-269656532656519/AnsiballZ_stat.py'
Jan 23 18:44:31 compute-0 sudo[143027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:31 compute-0 ceph-mon[75097]: pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:31 compute-0 python3.9[143029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:31 compute-0 sudo[143027]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:31 compute-0 sudo[143150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niosttjhyinqaamnpqjrmlwiteiwcvab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193870.879867-463-269656532656519/AnsiballZ_copy.py'
Jan 23 18:44:31 compute-0 sudo[143150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:31 compute-0 python3.9[143152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193870.879867-463-269656532656519/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:44:31 compute-0 sudo[143150]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:32 compute-0 sudo[143302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xewvdxaczduebrfzdlzrhlxnohsivxpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193872.2559366-480-233900156015726/AnsiballZ_file.py'
Jan 23 18:44:32 compute-0 sudo[143302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:32 compute-0 python3.9[143304]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:32 compute-0 sudo[143302]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:33 compute-0 sudo[143454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-virivpkglfdgrdqvwfmfnryhuyrlvjcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193872.882771-488-166529768332721/AnsiballZ_file.py'
Jan 23 18:44:33 compute-0 sudo[143454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:33 compute-0 ceph-mon[75097]: pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:33 compute-0 python3.9[143456]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:44:33 compute-0 sudo[143454]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:33 compute-0 sudo[143606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwjskeepjxwamnqyuhsotjheaimmtfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193873.5130389-496-82108108037333/AnsiballZ_stat.py'
Jan 23 18:44:33 compute-0 sudo[143606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:34 compute-0 python3.9[143608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:44:34 compute-0 sudo[143606]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:34 compute-0 sudo[143729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iawlvhnvuzloyxjychunjzioczwbbpoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193873.5130389-496-82108108037333/AnsiballZ_copy.py'
Jan 23 18:44:34 compute-0 sudo[143729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:34 compute-0 python3.9[143731]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193873.5130389-496-82108108037333/.source.json _original_basename=.pww_vltg follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:34 compute-0 sudo[143729]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:35 compute-0 ceph-mon[75097]: pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:35 compute-0 python3.9[143881]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:44:37 compute-0 ceph-mon[75097]: pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:37 compute-0 sudo[144302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdessxasxstgebehukailowteylgshfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193877.176613-536-29529989228142/AnsiballZ_container_config_data.py'
Jan 23 18:44:37 compute-0 sudo[144302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:37 compute-0 python3.9[144304]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 23 18:44:37 compute-0 sudo[144302]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:38 compute-0 ceph-mon[75097]: pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:38 compute-0 sudo[144454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dusvceawipnxszcaxftemerrflhnoahx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193878.2208984-547-170611825132904/AnsiballZ_container_config_hash.py'
Jan 23 18:44:38 compute-0 sudo[144454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:38 compute-0 python3.9[144456]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 18:44:39 compute-0 sudo[144454]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:39 compute-0 sudo[144606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fywpbpixlzomccqafdipfgvwhkdmxmgc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769193879.2786615-557-275139471948558/AnsiballZ_edpm_container_manage.py'
Jan 23 18:44:39 compute-0 sudo[144606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:40 compute-0 python3[144608]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:44:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:44:41 compute-0 ceph-mon[75097]: pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:44:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:43 compute-0 ceph-mon[75097]: pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:45 compute-0 ceph-mon[75097]: pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:45 compute-0 podman[144622]: 2026-01-23 18:44:45.338308243 +0000 UTC m=+5.158319797 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 23 18:44:45 compute-0 podman[144738]: 2026-01-23 18:44:45.451938446 +0000 UTC m=+0.021084763 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 23 18:44:45 compute-0 podman[144738]: 2026-01-23 18:44:45.573933277 +0000 UTC m=+0.143079584 container create 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 23 18:44:45 compute-0 python3[144608]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 23 18:44:45 compute-0 sudo[144606]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:44:46 compute-0 sudo[144926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqcdvbhpzcmbalogoarwgyfqjwmdgaia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193885.955945-565-243535520519426/AnsiballZ_stat.py'
Jan 23 18:44:46 compute-0 sudo[144926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:46 compute-0 python3.9[144928]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:44:46 compute-0 sudo[144926]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:47 compute-0 sudo[145080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuybtfkibgkvhkkqijxrdajvofbniuje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193886.6856678-574-153709558363019/AnsiballZ_file.py'
Jan 23 18:44:47 compute-0 sudo[145080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:44:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:51 compute-0 ceph-mds[96162]: mds.beacon.cephfs.compute-0.kgapua missed beacon ack from the monitors
Jan 23 18:44:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:55 compute-0 ceph-mds[96162]: mds.beacon.cephfs.compute-0.kgapua missed beacon ack from the monitors
Jan 23 18:44:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:44:59 compute-0 ceph-mds[96162]: mds.beacon.cephfs.compute-0.kgapua missed beacon ack from the monitors
Jan 23 18:44:59 compute-0 python3.9[145082]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:44:59 compute-0 sudo[145080]: pam_unix(sudo:session): session closed for user root
Jan 23 18:44:59 compute-0 sudo[145159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gegvknnbqaildxpccisddnrwcgvpznjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193886.6856678-574-153709558363019/AnsiballZ_stat.py'
Jan 23 18:44:59 compute-0 sudo[145159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:45:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:45:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:45:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:45:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:45:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:45:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 1e+01 seconds
Jan 23 18:45:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:02 compute-0 python3.9[145161]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:45:02 compute-0 sudo[145159]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:02 compute-0 ceph-mon[75097]: pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:02 compute-0 ceph-mon[75097]: pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:02 compute-0 ceph-mon[75097]: pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:02 compute-0 ceph-mon[75097]: pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:02 compute-0 ceph-mon[75097]: pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:02 compute-0 ceph-mon[75097]: pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:02 compute-0 ceph-mon[75097]: pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:02 compute-0 ceph-mon[75097]: pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:02 compute-0 sudo[145310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jocxrfgbxxoijiehnsvjmdxeinwxwhpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193902.3808408-574-255813259201093/AnsiballZ_copy.py'
Jan 23 18:45:02 compute-0 sudo[145310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:03 compute-0 python3.9[145312]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193902.3808408-574-255813259201093/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:45:03 compute-0 sudo[145310]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:03 compute-0 sudo[145386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijbjcqhcsdqytduvaloshvnaafroqmoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193902.3808408-574-255813259201093/AnsiballZ_systemd.py'
Jan 23 18:45:03 compute-0 sudo[145386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:03 compute-0 python3.9[145388]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 18:45:03 compute-0 systemd[1]: Reloading.
Jan 23 18:45:03 compute-0 systemd-rc-local-generator[145413]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:45:03 compute-0 systemd-sysv-generator[145418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:45:03 compute-0 ceph-mon[75097]: pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:04 compute-0 sudo[145386]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:04 compute-0 sudo[145497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoxzvdbjjycplerhehsbfrsptdfreyxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193902.3808408-574-255813259201093/AnsiballZ_systemd.py'
Jan 23 18:45:04 compute-0 sudo[145497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:04 compute-0 python3.9[145499]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:45:04 compute-0 systemd[1]: Reloading.
Jan 23 18:45:04 compute-0 systemd-rc-local-generator[145532]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:45:04 compute-0 systemd-sysv-generator[145536]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:45:04 compute-0 systemd[1]: Starting ovn_controller container...
Jan 23 18:45:05 compute-0 ceph-mon[75097]: pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46723b6dba78dbd47855d358b95ec7005c31a873f472ac6ad92282d3dc2da5f6/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd.
Jan 23 18:45:05 compute-0 podman[145542]: 2026-01-23 18:45:05.453643031 +0000 UTC m=+0.438427675 container init 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 18:45:05 compute-0 ovn_controller[145557]: + sudo -E kolla_set_configs
Jan 23 18:45:05 compute-0 podman[145542]: 2026-01-23 18:45:05.48473157 +0000 UTC m=+0.469516184 container start 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 23 18:45:05 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 23 18:45:05 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 23 18:45:05 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 23 18:45:05 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 23 18:45:05 compute-0 edpm-start-podman-container[145542]: ovn_controller
Jan 23 18:45:05 compute-0 systemd[145577]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 23 18:45:05 compute-0 edpm-start-podman-container[145541]: Creating additional drop-in dependency for "ovn_controller" (0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd)
Jan 23 18:45:05 compute-0 podman[145564]: 2026-01-23 18:45:05.608459244 +0000 UTC m=+0.113937132 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 18:45:05 compute-0 systemd[1]: 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd-33c8a361bbf13690.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 18:45:05 compute-0 systemd[1]: 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd-33c8a361bbf13690.service: Failed with result 'exit-code'.
Jan 23 18:45:05 compute-0 systemd[1]: Reloading.
Jan 23 18:45:05 compute-0 systemd[145577]: Queued start job for default target Main User Target.
Jan 23 18:45:05 compute-0 systemd[145577]: Created slice User Application Slice.
Jan 23 18:45:05 compute-0 systemd[145577]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 23 18:45:05 compute-0 systemd[145577]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 18:45:05 compute-0 systemd[145577]: Reached target Paths.
Jan 23 18:45:05 compute-0 systemd[145577]: Reached target Timers.
Jan 23 18:45:05 compute-0 systemd-rc-local-generator[145644]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:45:05 compute-0 systemd-sysv-generator[145648]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:45:05 compute-0 systemd[145577]: Starting D-Bus User Message Bus Socket...
Jan 23 18:45:05 compute-0 systemd[145577]: Starting Create User's Volatile Files and Directories...
Jan 23 18:45:05 compute-0 systemd[145577]: Listening on D-Bus User Message Bus Socket.
Jan 23 18:45:05 compute-0 systemd[145577]: Reached target Sockets.
Jan 23 18:45:05 compute-0 systemd[145577]: Finished Create User's Volatile Files and Directories.
Jan 23 18:45:05 compute-0 systemd[145577]: Reached target Basic System.
Jan 23 18:45:05 compute-0 systemd[145577]: Reached target Main User Target.
Jan 23 18:45:05 compute-0 systemd[145577]: Startup finished in 161ms.
Jan 23 18:45:05 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 23 18:45:05 compute-0 systemd[1]: Started ovn_controller container.
Jan 23 18:45:05 compute-0 systemd[1]: Started Session c1 of User root.
Jan 23 18:45:05 compute-0 sudo[145497]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:05 compute-0 ovn_controller[145557]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 18:45:05 compute-0 ovn_controller[145557]: INFO:__main__:Validating config file
Jan 23 18:45:05 compute-0 ovn_controller[145557]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 18:45:05 compute-0 ovn_controller[145557]: INFO:__main__:Writing out command to execute
Jan 23 18:45:06 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 23 18:45:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:06 compute-0 ovn_controller[145557]: ++ cat /run_command
Jan 23 18:45:06 compute-0 ovn_controller[145557]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 23 18:45:06 compute-0 ovn_controller[145557]: + ARGS=
Jan 23 18:45:06 compute-0 ovn_controller[145557]: + sudo kolla_copy_cacerts
Jan 23 18:45:06 compute-0 systemd[1]: Started Session c2 of User root.
Jan 23 18:45:06 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 23 18:45:06 compute-0 ovn_controller[145557]: + [[ ! -n '' ]]
Jan 23 18:45:06 compute-0 ovn_controller[145557]: + . kolla_extend_start
Jan 23 18:45:06 compute-0 ovn_controller[145557]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 23 18:45:06 compute-0 ovn_controller[145557]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 23 18:45:06 compute-0 ovn_controller[145557]: + umask 0022
Jan 23 18:45:06 compute-0 ovn_controller[145557]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 23 18:45:06 compute-0 NetworkManager[48925]: <info>  [1769193906.0750] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 23 18:45:06 compute-0 NetworkManager[48925]: <info>  [1769193906.0755] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 18:45:06 compute-0 NetworkManager[48925]: <warn>  [1769193906.0758] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 18:45:06 compute-0 NetworkManager[48925]: <info>  [1769193906.0763] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 23 18:45:06 compute-0 NetworkManager[48925]: <info>  [1769193906.0767] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 23 18:45:06 compute-0 NetworkManager[48925]: <info>  [1769193906.0770] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 23 18:45:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:06 compute-0 kernel: br-int: entered promiscuous mode
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 18:45:06 compute-0 ovn_controller[145557]: 2026-01-23T18:45:06Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 18:45:06 compute-0 NetworkManager[48925]: <info>  [1769193906.1028] manager: (ovn-5d41fa-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 23 18:45:06 compute-0 systemd-udevd[145693]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 18:45:06 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 23 18:45:06 compute-0 systemd-udevd[145695]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 18:45:06 compute-0 NetworkManager[48925]: <info>  [1769193906.1179] device (genev_sys_6081): carrier: link connected
Jan 23 18:45:06 compute-0 NetworkManager[48925]: <info>  [1769193906.1182] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 23 18:45:06 compute-0 python3.9[145823]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 18:45:07 compute-0 ceph-mon[75097]: pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:07 compute-0 sudo[145973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mujfbnicplncozgscpdllvkustalxnlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193907.1363375-619-219031763735/AnsiballZ_stat.py'
Jan 23 18:45:07 compute-0 sudo[145973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:07 compute-0 python3.9[145975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:07 compute-0 sudo[145973]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:07 compute-0 sudo[146096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivkdxbuhebhhhmsqnvuxztufbubgsvhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193907.1363375-619-219031763735/AnsiballZ_copy.py'
Jan 23 18:45:07 compute-0 sudo[146096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:08 compute-0 python3.9[146098]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193907.1363375-619-219031763735/.source.yaml _original_basename=.wzuysd2_ follow=False checksum=02604153b3ae4bd682af07aed1df38e6581047db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:45:08 compute-0 sudo[146096]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:08 compute-0 ceph-mon[75097]: pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:08 compute-0 sudo[146248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toimqtakbihbsvtbegwwpqzjaszjpskk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193908.3235562-634-213308842002677/AnsiballZ_command.py'
Jan 23 18:45:08 compute-0 sudo[146248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:08 compute-0 python3.9[146250]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:45:08 compute-0 ovs-vsctl[146251]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 23 18:45:08 compute-0 sudo[146248]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:09 compute-0 sudo[146401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swrhiohvulryeosyybmwedcdlhwuxzhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193909.0866559-642-181471547531014/AnsiballZ_command.py'
Jan 23 18:45:09 compute-0 sudo[146401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:09 compute-0 python3.9[146403]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:45:09 compute-0 ovs-vsctl[146405]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 23 18:45:09 compute-0 sudo[146401]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:10 compute-0 sudo[146556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayivbjqaugsgwzsntrxrfugscivzhtal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193909.9250913-656-198987693169685/AnsiballZ_command.py'
Jan 23 18:45:10 compute-0 sudo[146556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:10 compute-0 python3.9[146558]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:45:10 compute-0 ovs-vsctl[146559]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 23 18:45:10 compute-0 sudo[146556]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:10 compute-0 sshd-session[134369]: Connection closed by 192.168.122.30 port 45556
Jan 23 18:45:10 compute-0 sshd-session[134366]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:45:10 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 23 18:45:10 compute-0 systemd[1]: session-45.scope: Consumed 59.651s CPU time.
Jan 23 18:45:10 compute-0 systemd-logind[792]: Session 45 logged out. Waiting for processes to exit.
Jan 23 18:45:10 compute-0 systemd-logind[792]: Removed session 45.
Jan 23 18:45:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:11 compute-0 ceph-mon[75097]: pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:12 compute-0 ceph-mon[75097]: pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.149433) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193914149515, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 930, "num_deletes": 251, "total_data_size": 1336511, "memory_usage": 1358160, "flush_reason": "Manual Compaction"}
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193914166961, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1324620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8848, "largest_seqno": 9777, "table_properties": {"data_size": 1319964, "index_size": 2244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9656, "raw_average_key_size": 18, "raw_value_size": 1310716, "raw_average_value_size": 2555, "num_data_blocks": 105, "num_entries": 513, "num_filter_entries": 513, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193811, "oldest_key_time": 1769193811, "file_creation_time": 1769193914, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 17568 microseconds, and 7971 cpu microseconds.
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.167015) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1324620 bytes OK
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.167037) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.169151) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.169175) EVENT_LOG_v1 {"time_micros": 1769193914169168, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.169196) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1332030, prev total WAL file size 1332030, number of live WAL files 2.
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.170259) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1293KB)], [23(6874KB)]
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193914170318, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8364561, "oldest_snapshot_seqno": -1}
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3350 keys, 6422400 bytes, temperature: kUnknown
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193914226673, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6422400, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6397879, "index_size": 15068, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 81278, "raw_average_key_size": 24, "raw_value_size": 6335144, "raw_average_value_size": 1891, "num_data_blocks": 657, "num_entries": 3350, "num_filter_entries": 3350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769193914, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.227069) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6422400 bytes
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.228765) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.0 rd, 113.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.7 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(11.2) write-amplify(4.8) OK, records in: 3864, records dropped: 514 output_compression: NoCompression
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.228802) EVENT_LOG_v1 {"time_micros": 1769193914228784, "job": 8, "event": "compaction_finished", "compaction_time_micros": 56516, "compaction_time_cpu_micros": 22795, "output_level": 6, "num_output_files": 1, "total_output_size": 6422400, "num_input_records": 3864, "num_output_records": 3350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193914229857, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769193914232781, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.170145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.232911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.232914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.232916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.232917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:45:14 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:45:14.232919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:45:15 compute-0 ceph-mon[75097]: pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:15 compute-0 sshd-session[146584]: Accepted publickey for zuul from 192.168.122.30 port 58654 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:45:15 compute-0 systemd-logind[792]: New session 47 of user zuul.
Jan 23 18:45:15 compute-0 systemd[1]: Started Session 47 of User zuul.
Jan 23 18:45:15 compute-0 sshd-session[146584]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:45:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:16 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 23 18:45:16 compute-0 systemd[145577]: Activating special unit Exit the Session...
Jan 23 18:45:16 compute-0 systemd[145577]: Stopped target Main User Target.
Jan 23 18:45:16 compute-0 systemd[145577]: Stopped target Basic System.
Jan 23 18:45:16 compute-0 systemd[145577]: Stopped target Paths.
Jan 23 18:45:16 compute-0 systemd[145577]: Stopped target Sockets.
Jan 23 18:45:16 compute-0 systemd[145577]: Stopped target Timers.
Jan 23 18:45:16 compute-0 systemd[145577]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 18:45:16 compute-0 systemd[145577]: Closed D-Bus User Message Bus Socket.
Jan 23 18:45:16 compute-0 systemd[145577]: Stopped Create User's Volatile Files and Directories.
Jan 23 18:45:16 compute-0 systemd[145577]: Removed slice User Application Slice.
Jan 23 18:45:16 compute-0 systemd[145577]: Reached target Shutdown.
Jan 23 18:45:16 compute-0 systemd[145577]: Finished Exit the Session.
Jan 23 18:45:16 compute-0 systemd[145577]: Reached target Exit the Session.
Jan 23 18:45:16 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 23 18:45:16 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 23 18:45:16 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 23 18:45:16 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 23 18:45:16 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 23 18:45:16 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 23 18:45:16 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 23 18:45:16 compute-0 python3.9[146739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:45:17 compute-0 ceph-mon[75097]: pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:18 compute-0 sudo[146893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myrklkbulfnagwatkseiuttnrjiwyggh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193917.702866-29-214435944040932/AnsiballZ_file.py'
Jan 23 18:45:18 compute-0 sudo[146893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:18 compute-0 python3.9[146895]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:18 compute-0 sudo[146893]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:18 compute-0 sudo[147045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsucapyrbmuabcycdnqainebtqpmvymb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193918.5224736-29-216439138408178/AnsiballZ_file.py'
Jan 23 18:45:18 compute-0 sudo[147045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:19 compute-0 python3.9[147047]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:19 compute-0 sudo[147045]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:19 compute-0 ceph-mon[75097]: pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:19 compute-0 sudo[147197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijstysydapzxcwmzdpnkjqzeryemcldh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193919.1624517-29-183210220622686/AnsiballZ_file.py'
Jan 23 18:45:19 compute-0 sudo[147197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:19 compute-0 python3.9[147199]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:19 compute-0 sudo[147197]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:20 compute-0 sudo[147349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jigdefpfwwiojevgikhvayisommjtkal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193919.8253977-29-92972128055124/AnsiballZ_file.py'
Jan 23 18:45:20 compute-0 sudo[147349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:20 compute-0 python3.9[147351]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:20 compute-0 sudo[147349]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:20 compute-0 sudo[147501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nurqaujsdvqqbxsuzlxsktnoretrsbxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193920.514142-29-258407646455076/AnsiballZ_file.py'
Jan 23 18:45:20 compute-0 sudo[147501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:21 compute-0 python3.9[147503]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:21 compute-0 sudo[147501]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:45:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5543 writes, 24K keys, 5543 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5543 writes, 847 syncs, 6.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5543 writes, 24K keys, 5543 commit groups, 1.0 writes per commit group, ingest: 18.63 MB, 0.03 MB/s
                                           Interval WAL: 5543 writes, 847 syncs, 6.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 18:45:21 compute-0 ceph-mon[75097]: pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:21 compute-0 python3.9[147653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:45:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:22 compute-0 sudo[147803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjfcvhaljvedkdxhheaxrzoaobbzpfvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193922.0466893-73-63454213840103/AnsiballZ_seboolean.py'
Jan 23 18:45:22 compute-0 sudo[147803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:22 compute-0 python3.9[147805]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 23 18:45:23 compute-0 ceph-mon[75097]: pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:23 compute-0 sudo[147803]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:24 compute-0 python3.9[147955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:24 compute-0 python3.9[148077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193923.5726695-81-33282247080560/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:25 compute-0 ceph-mon[75097]: pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:25 compute-0 python3.9[148227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:26 compute-0 python3.9[148348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193925.1242206-96-277240240386396/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:45:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6806 writes, 28K keys, 6806 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6806 writes, 1252 syncs, 5.44 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6806 writes, 28K keys, 6806 commit groups, 1.0 writes per commit group, ingest: 19.71 MB, 0.03 MB/s
                                           Interval WAL: 6806 writes, 1252 syncs, 5.44 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 18:45:26 compute-0 ceph-mon[75097]: pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:26 compute-0 sudo[148386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:45:26 compute-0 sudo[148386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:26 compute-0 sudo[148386]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:26 compute-0 sudo[148435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 23 18:45:26 compute-0 sudo[148435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:26 compute-0 sudo[148548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqlbstgeezypuulcbkcddnjkecemdmqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193926.359105-113-261198761824747/AnsiballZ_setup.py'
Jan 23 18:45:26 compute-0 sudo[148548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:26 compute-0 sudo[148435]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:45:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:45:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:45:26 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:45:26 compute-0 sudo[148571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:45:26 compute-0 sudo[148571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:26 compute-0 sudo[148571]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:26 compute-0 sudo[148596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:45:26 compute-0 sudo[148596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:26 compute-0 python3.9[148550]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:45:27 compute-0 sudo[148548]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:27 compute-0 sudo[148596]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:45:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:45:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:45:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:45:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:45:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:45:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:45:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:45:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:45:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:45:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:45:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:45:27 compute-0 sudo[148706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:45:27 compute-0 sudo[148706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:27 compute-0 sudo[148706]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:27 compute-0 sudo[148757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthdefgfjdxvmjcbruddqhmamsubsrjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193926.359105-113-261198761824747/AnsiballZ_dnf.py'
Jan 23 18:45:27 compute-0 sudo[148757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:27 compute-0 sudo[148758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:45:27 compute-0 sudo[148758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:27 compute-0 python3.9[148771]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:45:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:45:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:45:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:45:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:45:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:45:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:45:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:45:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:45:27 compute-0 podman[148799]: 2026-01-23 18:45:27.794311038 +0000 UTC m=+0.020806202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:45:27 compute-0 podman[148799]: 2026-01-23 18:45:27.890743761 +0000 UTC m=+0.117238945 container create 06c8130aea1cd3b75c1bd119bd0a6a69fd2130ac53c706fc832e5bdc960925c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:45:27 compute-0 systemd[1]: Started libpod-conmon-06c8130aea1cd3b75c1bd119bd0a6a69fd2130ac53c706fc832e5bdc960925c9.scope.
Jan 23 18:45:28 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:45:28 compute-0 podman[148799]: 2026-01-23 18:45:28.058002734 +0000 UTC m=+0.284497908 container init 06c8130aea1cd3b75c1bd119bd0a6a69fd2130ac53c706fc832e5bdc960925c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:45:28 compute-0 podman[148799]: 2026-01-23 18:45:28.06503761 +0000 UTC m=+0.291532754 container start 06c8130aea1cd3b75c1bd119bd0a6a69fd2130ac53c706fc832e5bdc960925c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hermann, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:45:28 compute-0 podman[148799]: 2026-01-23 18:45:28.068781764 +0000 UTC m=+0.295276908 container attach 06c8130aea1cd3b75c1bd119bd0a6a69fd2130ac53c706fc832e5bdc960925c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hermann, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 18:45:28 compute-0 mystifying_hermann[148815]: 167 167
Jan 23 18:45:28 compute-0 systemd[1]: libpod-06c8130aea1cd3b75c1bd119bd0a6a69fd2130ac53c706fc832e5bdc960925c9.scope: Deactivated successfully.
Jan 23 18:45:28 compute-0 podman[148799]: 2026-01-23 18:45:28.07341895 +0000 UTC m=+0.299914104 container died 06c8130aea1cd3b75c1bd119bd0a6a69fd2130ac53c706fc832e5bdc960925c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hermann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:45:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-24916a34d4758ecfcae1456dc6c8743864b2119b58d88c4aa545cd3ec147b0a6-merged.mount: Deactivated successfully.
Jan 23 18:45:28 compute-0 podman[148799]: 2026-01-23 18:45:28.131732169 +0000 UTC m=+0.358227313 container remove 06c8130aea1cd3b75c1bd119bd0a6a69fd2130ac53c706fc832e5bdc960925c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 18:45:28 compute-0 systemd[1]: libpod-conmon-06c8130aea1cd3b75c1bd119bd0a6a69fd2130ac53c706fc832e5bdc960925c9.scope: Deactivated successfully.
Jan 23 18:45:28 compute-0 podman[148837]: 2026-01-23 18:45:28.306356818 +0000 UTC m=+0.036937646 container create 219b6f297720906482dd6ad2f244e14c5d27900b124838549f40dbc214b1c9fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:45:28 compute-0 systemd[1]: Started libpod-conmon-219b6f297720906482dd6ad2f244e14c5d27900b124838549f40dbc214b1c9fa.scope.
Jan 23 18:45:28 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:45:28 compute-0 podman[148837]: 2026-01-23 18:45:28.289996368 +0000 UTC m=+0.020577256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60c19fc1f7b103f39f0a864e69673603c7615a627766350d0e61b2e3ccde5eab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60c19fc1f7b103f39f0a864e69673603c7615a627766350d0e61b2e3ccde5eab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60c19fc1f7b103f39f0a864e69673603c7615a627766350d0e61b2e3ccde5eab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60c19fc1f7b103f39f0a864e69673603c7615a627766350d0e61b2e3ccde5eab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60c19fc1f7b103f39f0a864e69673603c7615a627766350d0e61b2e3ccde5eab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:28 compute-0 podman[148837]: 2026-01-23 18:45:28.404214236 +0000 UTC m=+0.134795124 container init 219b6f297720906482dd6ad2f244e14c5d27900b124838549f40dbc214b1c9fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 18:45:28 compute-0 podman[148837]: 2026-01-23 18:45:28.414452882 +0000 UTC m=+0.145033720 container start 219b6f297720906482dd6ad2f244e14c5d27900b124838549f40dbc214b1c9fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euclid, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 18:45:28 compute-0 podman[148837]: 2026-01-23 18:45:28.428615046 +0000 UTC m=+0.159195934 container attach 219b6f297720906482dd6ad2f244e14c5d27900b124838549f40dbc214b1c9fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:45:28 compute-0 youthful_euclid[148854]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:45:28 compute-0 youthful_euclid[148854]: --> All data devices are unavailable
Jan 23 18:45:28 compute-0 ceph-mon[75097]: pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:45:28
Jan 23 18:45:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:45:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:45:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'default.rgw.log', 'volumes', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta']
Jan 23 18:45:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:45:28 compute-0 systemd[1]: libpod-219b6f297720906482dd6ad2f244e14c5d27900b124838549f40dbc214b1c9fa.scope: Deactivated successfully.
Jan 23 18:45:28 compute-0 podman[148837]: 2026-01-23 18:45:28.900600113 +0000 UTC m=+0.631180981 container died 219b6f297720906482dd6ad2f244e14c5d27900b124838549f40dbc214b1c9fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 18:45:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-60c19fc1f7b103f39f0a864e69673603c7615a627766350d0e61b2e3ccde5eab-merged.mount: Deactivated successfully.
Jan 23 18:45:28 compute-0 podman[148837]: 2026-01-23 18:45:28.952763008 +0000 UTC m=+0.683343846 container remove 219b6f297720906482dd6ad2f244e14c5d27900b124838549f40dbc214b1c9fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euclid, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 18:45:28 compute-0 systemd[1]: libpod-conmon-219b6f297720906482dd6ad2f244e14c5d27900b124838549f40dbc214b1c9fa.scope: Deactivated successfully.
Jan 23 18:45:29 compute-0 sudo[148758]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:29 compute-0 sudo[148884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:45:29 compute-0 sudo[148884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:29 compute-0 sudo[148884]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:29 compute-0 sudo[148757]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:29 compute-0 sudo[148909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:45:29 compute-0 sudo[148909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:29 compute-0 podman[149022]: 2026-01-23 18:45:29.420014287 +0000 UTC m=+0.049245413 container create 914bdb66ac9a60287fff5d87a78d174fbfceedc9676de0263161c136a43ec592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:45:29 compute-0 systemd[1]: Started libpod-conmon-914bdb66ac9a60287fff5d87a78d174fbfceedc9676de0263161c136a43ec592.scope.
Jan 23 18:45:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:45:29 compute-0 podman[149022]: 2026-01-23 18:45:29.40014203 +0000 UTC m=+0.029373186 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:45:29 compute-0 podman[149022]: 2026-01-23 18:45:29.501578077 +0000 UTC m=+0.130809213 container init 914bdb66ac9a60287fff5d87a78d174fbfceedc9676de0263161c136a43ec592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:45:29 compute-0 podman[149022]: 2026-01-23 18:45:29.509613658 +0000 UTC m=+0.138844784 container start 914bdb66ac9a60287fff5d87a78d174fbfceedc9676de0263161c136a43ec592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 23 18:45:29 compute-0 podman[149022]: 2026-01-23 18:45:29.51326392 +0000 UTC m=+0.142495036 container attach 914bdb66ac9a60287fff5d87a78d174fbfceedc9676de0263161c136a43ec592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:45:29 compute-0 hardcore_euclid[149039]: 167 167
Jan 23 18:45:29 compute-0 systemd[1]: libpod-914bdb66ac9a60287fff5d87a78d174fbfceedc9676de0263161c136a43ec592.scope: Deactivated successfully.
Jan 23 18:45:29 compute-0 podman[149022]: 2026-01-23 18:45:29.515624988 +0000 UTC m=+0.144856154 container died 914bdb66ac9a60287fff5d87a78d174fbfceedc9676de0263161c136a43ec592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 18:45:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0de7ee5373bb2e02ee78809f35f77f59ae4c2ada99aa53fc37bcdddf996117a7-merged.mount: Deactivated successfully.
Jan 23 18:45:29 compute-0 podman[149022]: 2026-01-23 18:45:29.561082736 +0000 UTC m=+0.190313882 container remove 914bdb66ac9a60287fff5d87a78d174fbfceedc9676de0263161c136a43ec592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:45:29 compute-0 systemd[1]: libpod-conmon-914bdb66ac9a60287fff5d87a78d174fbfceedc9676de0263161c136a43ec592.scope: Deactivated successfully.
Jan 23 18:45:29 compute-0 podman[149071]: 2026-01-23 18:45:29.736058743 +0000 UTC m=+0.053694705 container create 9b4419d8b290674e8527af74f0842e73bbbb6eaec96c6cd34f23d70ec0a39afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:45:29 compute-0 systemd[1]: Started libpod-conmon-9b4419d8b290674e8527af74f0842e73bbbb6eaec96c6cd34f23d70ec0a39afb.scope.
Jan 23 18:45:29 compute-0 podman[149071]: 2026-01-23 18:45:29.706296349 +0000 UTC m=+0.023932341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:45:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3103ee1298f7780df9e8c874772f0594607ae34b21d1aca42428b80cf0c15165/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3103ee1298f7780df9e8c874772f0594607ae34b21d1aca42428b80cf0c15165/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3103ee1298f7780df9e8c874772f0594607ae34b21d1aca42428b80cf0c15165/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3103ee1298f7780df9e8c874772f0594607ae34b21d1aca42428b80cf0c15165/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:29 compute-0 podman[149071]: 2026-01-23 18:45:29.829591353 +0000 UTC m=+0.147227345 container init 9b4419d8b290674e8527af74f0842e73bbbb6eaec96c6cd34f23d70ec0a39afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:45:29 compute-0 podman[149071]: 2026-01-23 18:45:29.838335452 +0000 UTC m=+0.155971404 container start 9b4419d8b290674e8527af74f0842e73bbbb6eaec96c6cd34f23d70ec0a39afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 18:45:29 compute-0 podman[149071]: 2026-01-23 18:45:29.842774253 +0000 UTC m=+0.160410205 container attach 9b4419d8b290674e8527af74f0842e73bbbb6eaec96c6cd34f23d70ec0a39afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:45:29 compute-0 sudo[149156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgvokkbzmvxhbshadecxnhrbnhsuyfzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193929.2810407-125-62913756294733/AnsiballZ_systemd.py'
Jan 23 18:45:29 compute-0 sudo[149156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:30 compute-0 distracted_greider[149116]: {
Jan 23 18:45:30 compute-0 distracted_greider[149116]:     "0": [
Jan 23 18:45:30 compute-0 distracted_greider[149116]:         {
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "devices": [
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "/dev/loop3"
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             ],
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_name": "ceph_lv0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_size": "21470642176",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "name": "ceph_lv0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "tags": {
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.cluster_name": "ceph",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.crush_device_class": "",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.encrypted": "0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.objectstore": "bluestore",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.osd_id": "0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.type": "block",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.vdo": "0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.with_tpm": "0"
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             },
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "type": "block",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "vg_name": "ceph_vg0"
Jan 23 18:45:30 compute-0 distracted_greider[149116]:         }
Jan 23 18:45:30 compute-0 distracted_greider[149116]:     ],
Jan 23 18:45:30 compute-0 distracted_greider[149116]:     "1": [
Jan 23 18:45:30 compute-0 distracted_greider[149116]:         {
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "devices": [
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "/dev/loop4"
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             ],
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_name": "ceph_lv1",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_size": "21470642176",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "name": "ceph_lv1",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "tags": {
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.cluster_name": "ceph",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.crush_device_class": "",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.encrypted": "0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.objectstore": "bluestore",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.osd_id": "1",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.type": "block",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.vdo": "0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.with_tpm": "0"
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             },
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "type": "block",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "vg_name": "ceph_vg1"
Jan 23 18:45:30 compute-0 distracted_greider[149116]:         }
Jan 23 18:45:30 compute-0 distracted_greider[149116]:     ],
Jan 23 18:45:30 compute-0 distracted_greider[149116]:     "2": [
Jan 23 18:45:30 compute-0 distracted_greider[149116]:         {
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "devices": [
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "/dev/loop5"
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             ],
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_name": "ceph_lv2",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_size": "21470642176",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "name": "ceph_lv2",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "tags": {
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.cluster_name": "ceph",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.crush_device_class": "",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.encrypted": "0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.objectstore": "bluestore",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.osd_id": "2",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.type": "block",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.vdo": "0",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:                 "ceph.with_tpm": "0"
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             },
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "type": "block",
Jan 23 18:45:30 compute-0 distracted_greider[149116]:             "vg_name": "ceph_vg2"
Jan 23 18:45:30 compute-0 distracted_greider[149116]:         }
Jan 23 18:45:30 compute-0 distracted_greider[149116]:     ]
Jan 23 18:45:30 compute-0 distracted_greider[149116]: }
Jan 23 18:45:30 compute-0 systemd[1]: libpod-9b4419d8b290674e8527af74f0842e73bbbb6eaec96c6cd34f23d70ec0a39afb.scope: Deactivated successfully.
Jan 23 18:45:30 compute-0 podman[149071]: 2026-01-23 18:45:30.162510221 +0000 UTC m=+0.480146163 container died 9b4419d8b290674e8527af74f0842e73bbbb6eaec96c6cd34f23d70ec0a39afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 18:45:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3103ee1298f7780df9e8c874772f0594607ae34b21d1aca42428b80cf0c15165-merged.mount: Deactivated successfully.
Jan 23 18:45:30 compute-0 podman[149071]: 2026-01-23 18:45:30.205863256 +0000 UTC m=+0.523499168 container remove 9b4419d8b290674e8527af74f0842e73bbbb6eaec96c6cd34f23d70ec0a39afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 18:45:30 compute-0 systemd[1]: libpod-conmon-9b4419d8b290674e8527af74f0842e73bbbb6eaec96c6cd34f23d70ec0a39afb.scope: Deactivated successfully.
Jan 23 18:45:30 compute-0 sudo[148909]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:30 compute-0 python3.9[149158]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 18:45:30 compute-0 sudo[149174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:45:30 compute-0 sudo[149174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:30 compute-0 sudo[149174]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:30 compute-0 sudo[149156]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:30 compute-0 sudo[149202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:45:30 compute-0 sudo[149202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:30 compute-0 podman[149314]: 2026-01-23 18:45:30.657208837 +0000 UTC m=+0.048806292 container create 02faaed136ece7f75c0d79cca90ca0d9cbd57ca35d306853e68a15e6af9755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:45:30 compute-0 systemd[1]: Started libpod-conmon-02faaed136ece7f75c0d79cca90ca0d9cbd57ca35d306853e68a15e6af9755cc.scope.
Jan 23 18:45:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:45:30 compute-0 podman[149314]: 2026-01-23 18:45:30.644488989 +0000 UTC m=+0.036086494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:45:30 compute-0 podman[149314]: 2026-01-23 18:45:30.740958082 +0000 UTC m=+0.132555577 container init 02faaed136ece7f75c0d79cca90ca0d9cbd57ca35d306853e68a15e6af9755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:45:30 compute-0 podman[149314]: 2026-01-23 18:45:30.749698521 +0000 UTC m=+0.141295996 container start 02faaed136ece7f75c0d79cca90ca0d9cbd57ca35d306853e68a15e6af9755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_heyrovsky, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 18:45:30 compute-0 podman[149314]: 2026-01-23 18:45:30.753578298 +0000 UTC m=+0.145175793 container attach 02faaed136ece7f75c0d79cca90ca0d9cbd57ca35d306853e68a15e6af9755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_heyrovsky, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:45:30 compute-0 sad_heyrovsky[149361]: 167 167
Jan 23 18:45:30 compute-0 systemd[1]: libpod-02faaed136ece7f75c0d79cca90ca0d9cbd57ca35d306853e68a15e6af9755cc.scope: Deactivated successfully.
Jan 23 18:45:30 compute-0 podman[149314]: 2026-01-23 18:45:30.75770528 +0000 UTC m=+0.149302735 container died 02faaed136ece7f75c0d79cca90ca0d9cbd57ca35d306853e68a15e6af9755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:45:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-92fafdeec387fd78daa03162d60740ed176f8c2e56089f1cf65b5ca6b1a74029-merged.mount: Deactivated successfully.
Jan 23 18:45:30 compute-0 podman[149314]: 2026-01-23 18:45:30.805398725 +0000 UTC m=+0.196996180 container remove 02faaed136ece7f75c0d79cca90ca0d9cbd57ca35d306853e68a15e6af9755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_heyrovsky, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:45:30 compute-0 systemd[1]: libpod-conmon-02faaed136ece7f75c0d79cca90ca0d9cbd57ca35d306853e68a15e6af9755cc.scope: Deactivated successfully.
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:45:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:45:30 compute-0 podman[149432]: 2026-01-23 18:45:30.967763276 +0000 UTC m=+0.044839923 container create 665d34a0ce9e2b90c9eeb1e7b11f62a8c637e675afa4abe3599ca22ba3ade75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:45:31 compute-0 python3.9[149420]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:31 compute-0 systemd[1]: Started libpod-conmon-665d34a0ce9e2b90c9eeb1e7b11f62a8c637e675afa4abe3599ca22ba3ade75c.scope.
Jan 23 18:45:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61c1c15f29ced00a2cf7ca37a1881f7b7d011b7e9ad1da96c4884c19fe2ed0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61c1c15f29ced00a2cf7ca37a1881f7b7d011b7e9ad1da96c4884c19fe2ed0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61c1c15f29ced00a2cf7ca37a1881f7b7d011b7e9ad1da96c4884c19fe2ed0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61c1c15f29ced00a2cf7ca37a1881f7b7d011b7e9ad1da96c4884c19fe2ed0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:45:31 compute-0 podman[149432]: 2026-01-23 18:45:30.94834748 +0000 UTC m=+0.025424147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:45:31 compute-0 podman[149432]: 2026-01-23 18:45:31.048621488 +0000 UTC m=+0.125698155 container init 665d34a0ce9e2b90c9eeb1e7b11f62a8c637e675afa4abe3599ca22ba3ade75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:45:31 compute-0 podman[149432]: 2026-01-23 18:45:31.05464883 +0000 UTC m=+0.131725467 container start 665d34a0ce9e2b90c9eeb1e7b11f62a8c637e675afa4abe3599ca22ba3ade75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_antonelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 23 18:45:31 compute-0 podman[149432]: 2026-01-23 18:45:31.059301376 +0000 UTC m=+0.136378013 container attach 665d34a0ce9e2b90c9eeb1e7b11f62a8c637e675afa4abe3599ca22ba3ade75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_antonelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:45:31 compute-0 ceph-mon[75097]: pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:45:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 5436 writes, 23K keys, 5436 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5436 writes, 801 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5436 writes, 23K keys, 5436 commit groups, 1.0 writes per commit group, ingest: 18.34 MB, 0.03 MB/s
                                           Interval WAL: 5436 writes, 801 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 18:45:31 compute-0 python3.9[149584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193930.5187957-133-175822160470945/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:31 compute-0 lvm[149696]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:45:31 compute-0 lvm[149696]: VG ceph_vg0 finished
Jan 23 18:45:31 compute-0 lvm[149704]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:45:31 compute-0 lvm[149704]: VG ceph_vg1 finished
Jan 23 18:45:31 compute-0 lvm[149727]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:45:31 compute-0 lvm[149727]: VG ceph_vg2 finished
Jan 23 18:45:31 compute-0 reverent_antonelli[149449]: {}
Jan 23 18:45:31 compute-0 systemd[1]: libpod-665d34a0ce9e2b90c9eeb1e7b11f62a8c637e675afa4abe3599ca22ba3ade75c.scope: Deactivated successfully.
Jan 23 18:45:31 compute-0 systemd[1]: libpod-665d34a0ce9e2b90c9eeb1e7b11f62a8c637e675afa4abe3599ca22ba3ade75c.scope: Consumed 1.234s CPU time.
Jan 23 18:45:31 compute-0 podman[149432]: 2026-01-23 18:45:31.854465298 +0000 UTC m=+0.931541935 container died 665d34a0ce9e2b90c9eeb1e7b11f62a8c637e675afa4abe3599ca22ba3ade75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d61c1c15f29ced00a2cf7ca37a1881f7b7d011b7e9ad1da96c4884c19fe2ed0b-merged.mount: Deactivated successfully.
Jan 23 18:45:31 compute-0 podman[149432]: 2026-01-23 18:45:31.8965268 +0000 UTC m=+0.973603437 container remove 665d34a0ce9e2b90c9eeb1e7b11f62a8c637e675afa4abe3599ca22ba3ade75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:45:31 compute-0 systemd[1]: libpod-conmon-665d34a0ce9e2b90c9eeb1e7b11f62a8c637e675afa4abe3599ca22ba3ade75c.scope: Deactivated successfully.
Jan 23 18:45:31 compute-0 sudo[149202]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:45:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:45:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:45:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:45:32 compute-0 sudo[149816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:45:32 compute-0 sudo[149816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:45:32 compute-0 sudo[149816]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:32 compute-0 python3.9[149815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:32 compute-0 python3.9[149961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193931.6577835-133-276638698447741/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:45:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:45:32 compute-0 ceph-mon[75097]: pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:33 compute-0 python3.9[150111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:34 compute-0 python3.9[150232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193933.2467022-177-201336559649600/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:34 compute-0 ceph-mgr[75390]: [devicehealth INFO root] Check health
Jan 23 18:45:34 compute-0 python3.9[150382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:35 compute-0 ceph-mon[75097]: pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:35 compute-0 python3.9[150503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193934.3426313-177-29309907137202/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:35 compute-0 ovn_controller[145557]: 2026-01-23T18:45:35Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Jan 23 18:45:35 compute-0 ovn_controller[145557]: 2026-01-23T18:45:35Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 23 18:45:35 compute-0 podman[150627]: 2026-01-23 18:45:35.840659267 +0000 UTC m=+0.098745202 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:45:35 compute-0 python3.9[150669]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:45:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:36 compute-0 sudo[150831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbijrsxsdwghidggnfcjkuyzurmidbnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193936.187173-215-149238748662099/AnsiballZ_file.py'
Jan 23 18:45:36 compute-0 sudo[150831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:36 compute-0 python3.9[150833]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:36 compute-0 sudo[150831]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:37 compute-0 ceph-mon[75097]: pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:37 compute-0 sudo[150983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhwceihbawufismczpbfuwfiathaznok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193936.8576088-223-276449856014675/AnsiballZ_stat.py'
Jan 23 18:45:37 compute-0 sudo[150983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:37 compute-0 python3.9[150985]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:37 compute-0 sudo[150983]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:37 compute-0 sudo[151061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-algvglzkncztgdycqbcnqndaimjuyftr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193936.8576088-223-276449856014675/AnsiballZ_file.py'
Jan 23 18:45:37 compute-0 sudo[151061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:37 compute-0 python3.9[151063]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:37 compute-0 sudo[151061]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:38 compute-0 sudo[151213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rufacpcfptfnspbdjhbxtxcrcdbqarsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193938.0783942-223-190089698828179/AnsiballZ_stat.py'
Jan 23 18:45:38 compute-0 sudo[151213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:38 compute-0 python3.9[151215]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:38 compute-0 sudo[151213]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:38 compute-0 sudo[151291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzrleibigeuygmrnsnlfjevuyxyhjbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193938.0783942-223-190089698828179/AnsiballZ_file.py'
Jan 23 18:45:38 compute-0 sudo[151291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:39 compute-0 python3.9[151293]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:39 compute-0 sudo[151291]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:39 compute-0 ceph-mon[75097]: pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:39 compute-0 sudo[151443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmskzzahshqhevgpvlyczheziaywqgst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193939.2313457-246-240796357656412/AnsiballZ_file.py'
Jan 23 18:45:39 compute-0 sudo[151443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:39 compute-0 python3.9[151445]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:45:39 compute-0 sudo[151443]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:45:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:45:40 compute-0 sudo[151595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnsqcbemwuqivzvtayretajwtwzpqhux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193939.996803-254-197748362076340/AnsiballZ_stat.py'
Jan 23 18:45:40 compute-0 sudo[151595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:40 compute-0 python3.9[151597]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:40 compute-0 sudo[151595]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:40 compute-0 sudo[151673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqndwxopuzbhlnqgpxcibfcbnrlrwuns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193939.996803-254-197748362076340/AnsiballZ_file.py'
Jan 23 18:45:40 compute-0 sudo[151673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:40 compute-0 python3.9[151675]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:45:40 compute-0 sudo[151673]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:41 compute-0 ceph-mon[75097]: pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:41 compute-0 sudo[151825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntrwrdtzpzecsrcoybtaztllkkmwcfol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193941.0992496-266-123186641494182/AnsiballZ_stat.py'
Jan 23 18:45:41 compute-0 sudo[151825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:41 compute-0 python3.9[151827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:41 compute-0 sudo[151825]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:41 compute-0 sudo[151903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czginpgpivwhmnjfvxsztjimwkqntbtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193941.0992496-266-123186641494182/AnsiballZ_file.py'
Jan 23 18:45:41 compute-0 sudo[151903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:42 compute-0 python3.9[151905]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:45:42 compute-0 sudo[151903]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:42 compute-0 sudo[152055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbcmoxnfbslpxdijushnhdrrbabcnaur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193942.3128572-278-102633090082305/AnsiballZ_systemd.py'
Jan 23 18:45:42 compute-0 sudo[152055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:42 compute-0 python3.9[152057]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:45:42 compute-0 systemd[1]: Reloading.
Jan 23 18:45:43 compute-0 systemd-sysv-generator[152086]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:45:43 compute-0 systemd-rc-local-generator[152082]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:45:43 compute-0 ceph-mon[75097]: pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:43 compute-0 sudo[152055]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:43 compute-0 sudo[152243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxuzosxyinxvhbiapppyigsihuthugoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193943.501483-286-261506992554394/AnsiballZ_stat.py'
Jan 23 18:45:43 compute-0 sudo[152243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:43 compute-0 python3.9[152245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:43 compute-0 sudo[152243]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:44 compute-0 sudo[152321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eciweqwypmrbssyamlvwgjnzhmairexy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193943.501483-286-261506992554394/AnsiballZ_file.py'
Jan 23 18:45:44 compute-0 sudo[152321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:44 compute-0 python3.9[152323]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:45:44 compute-0 sudo[152321]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:44 compute-0 sudo[152473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzgkjiihxqfymohqzsdtzogszlpniacj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193944.59311-298-218513078640871/AnsiballZ_stat.py'
Jan 23 18:45:44 compute-0 sudo[152473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:45 compute-0 python3.9[152475]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:45 compute-0 sudo[152473]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:45 compute-0 ceph-mon[75097]: pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:45 compute-0 sudo[152551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klrmlhpzvcnqrgpodflpfoqknivnhigz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193944.59311-298-218513078640871/AnsiballZ_file.py'
Jan 23 18:45:45 compute-0 sudo[152551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:45 compute-0 python3.9[152553]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:45:45 compute-0 sudo[152551]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:46 compute-0 sudo[152703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpytexepcoqlwuprfwdxqlqdrgoscakf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193945.8811963-310-168551622902457/AnsiballZ_systemd.py'
Jan 23 18:45:46 compute-0 sudo[152703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:46 compute-0 python3.9[152705]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:45:46 compute-0 systemd[1]: Reloading.
Jan 23 18:45:46 compute-0 systemd-rc-local-generator[152734]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:45:46 compute-0 systemd-sysv-generator[152737]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:45:46 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 18:45:46 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 18:45:46 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 18:45:46 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 18:45:46 compute-0 sudo[152703]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:47 compute-0 ceph-mon[75097]: pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:47 compute-0 sudo[152897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnzhlycobdalbqgghlnrotklpkksjetv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193947.1301444-320-187870640459576/AnsiballZ_file.py'
Jan 23 18:45:47 compute-0 sudo[152897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:47 compute-0 python3.9[152899]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:47 compute-0 sudo[152897]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:48 compute-0 sudo[153049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svxiirjtcoddkfhsktqzzmwuedcaledd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193947.7805557-328-262166501755053/AnsiballZ_stat.py'
Jan 23 18:45:48 compute-0 sudo[153049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:48 compute-0 python3.9[153051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:48 compute-0 sudo[153049]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:48 compute-0 sudo[153172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeiemenxsuftlgxhkzgojsyleqyqbioa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193947.7805557-328-262166501755053/AnsiballZ_copy.py'
Jan 23 18:45:48 compute-0 sudo[153172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:49 compute-0 ceph-mon[75097]: pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:49 compute-0 python3.9[153174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769193947.7805557-328-262166501755053/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:49 compute-0 sudo[153172]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:50 compute-0 sudo[153324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgemvbwyolsdiqeueaopynehsxidgjko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193950.2195315-345-30757358771775/AnsiballZ_file.py'
Jan 23 18:45:50 compute-0 sudo[153324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:50 compute-0 ceph-mon[75097]: pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:50 compute-0 python3.9[153326]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:45:50 compute-0 sudo[153324]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:51 compute-0 sudo[153476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gchywembjpepxnimyeezhqrxnbjavmvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193950.9749725-353-80770214680637/AnsiballZ_file.py'
Jan 23 18:45:51 compute-0 sudo[153476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:51 compute-0 python3.9[153478]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:45:51 compute-0 sudo[153476]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:51 compute-0 sudo[153628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxtmdzkfjnllqcmsxsosamufwcjkjjuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193951.6457183-361-229550958494783/AnsiballZ_stat.py'
Jan 23 18:45:51 compute-0 sudo[153628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:52 compute-0 python3.9[153630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:45:52 compute-0 sudo[153628]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:52 compute-0 sudo[153751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rximbqjjjctaxudruaaqvlsmjwopdcov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193951.6457183-361-229550958494783/AnsiballZ_copy.py'
Jan 23 18:45:52 compute-0 sudo[153751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:52 compute-0 python3.9[153753]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193951.6457183-361-229550958494783/.source.json _original_basename=.upnxqx8p follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:45:52 compute-0 sudo[153751]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:53 compute-0 ceph-mon[75097]: pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:53 compute-0 python3.9[153903]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:45:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:55 compute-0 sudo[154324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udlbbhexoqkjznquxpvlhjyuxzbebvql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193954.7881622-401-82336152033536/AnsiballZ_container_config_data.py'
Jan 23 18:45:55 compute-0 sudo[154324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:55 compute-0 ceph-mon[75097]: pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:55 compute-0 python3.9[154326]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 23 18:45:55 compute-0 sudo[154324]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:45:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:56 compute-0 sudo[154476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fujodtirlvinntywwqxtcbzovnmlbzpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193955.7037776-412-33548779038185/AnsiballZ_container_config_hash.py'
Jan 23 18:45:56 compute-0 sudo[154476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:56 compute-0 python3.9[154478]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 18:45:56 compute-0 sudo[154476]: pam_unix(sudo:session): session closed for user root
Jan 23 18:45:57 compute-0 sudo[154628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duitmecdxlgxqnivcatagqjcoxkfsoyh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769193956.6792176-422-78499760232166/AnsiballZ_edpm_container_manage.py'
Jan 23 18:45:57 compute-0 sudo[154628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:45:57 compute-0 python3[154630]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 18:45:57 compute-0 ceph-mon[75097]: pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:45:58 compute-0 ceph-mon[75097]: pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:46:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:46:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:46:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:46:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:46:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:46:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:01 compute-0 ceph-mon[75097]: pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:02 compute-0 ceph-mon[75097]: pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:04 compute-0 ceph-mon[75097]: pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:06 compute-0 ceph-mon[75097]: pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:06 compute-0 podman[154643]: 2026-01-23 18:46:06.564122629 +0000 UTC m=+9.025805991 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 18:46:06 compute-0 podman[154736]: 2026-01-23 18:46:06.598290304 +0000 UTC m=+0.663813207 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 18:46:06 compute-0 podman[154797]: 2026-01-23 18:46:06.775130118 +0000 UTC m=+0.076330571 container create 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:46:06 compute-0 podman[154797]: 2026-01-23 18:46:06.737498007 +0000 UTC m=+0.038698550 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 18:46:06 compute-0 python3[154630]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 18:46:06 compute-0 sudo[154628]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:07 compute-0 sudo[154985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnjexczilnziliciwmtuwchgvbvxinh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193967.088529-430-40569926029821/AnsiballZ_stat.py'
Jan 23 18:46:07 compute-0 sudo[154985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:07 compute-0 python3.9[154987]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:46:07 compute-0 sudo[154985]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:08 compute-0 sudo[155139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogzzoqlndybkkzwtvfjmiyrqxsmegspo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193967.8208642-439-138386162666835/AnsiballZ_file.py'
Jan 23 18:46:08 compute-0 sudo[155139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:08 compute-0 python3.9[155141]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:08 compute-0 sudo[155139]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:08 compute-0 sudo[155215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anhyznfjbfutygcubghgasxupotmjrhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193967.8208642-439-138386162666835/AnsiballZ_stat.py'
Jan 23 18:46:08 compute-0 sudo[155215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:08 compute-0 python3.9[155217]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:46:08 compute-0 sudo[155215]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:09 compute-0 sudo[155366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geeqrmqopqymzrosoewggdwujkcpgoqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193968.7516615-439-215279037371993/AnsiballZ_copy.py'
Jan 23 18:46:09 compute-0 sudo[155366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:09 compute-0 ceph-mon[75097]: pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:09 compute-0 python3.9[155368]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769193968.7516615-439-215279037371993/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:09 compute-0 sudo[155366]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:09 compute-0 sudo[155442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmkuslxngcigfnbduxgjuzwrpxyasqbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193968.7516615-439-215279037371993/AnsiballZ_systemd.py'
Jan 23 18:46:09 compute-0 sudo[155442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:09 compute-0 python3.9[155444]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 18:46:09 compute-0 systemd[1]: Reloading.
Jan 23 18:46:09 compute-0 systemd-rc-local-generator[155469]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:46:09 compute-0 systemd-sysv-generator[155472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:46:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:10 compute-0 sudo[155442]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:10 compute-0 sudo[155553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oanugcllyuatgmbksbyrqndxaqyboynq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193968.7516615-439-215279037371993/AnsiballZ_systemd.py'
Jan 23 18:46:10 compute-0 sudo[155553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:10 compute-0 python3.9[155555]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:46:10 compute-0 systemd[1]: Reloading.
Jan 23 18:46:10 compute-0 systemd-sysv-generator[155589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:46:10 compute-0 systemd-rc-local-generator[155584]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:46:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:11 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 23 18:46:11 compute-0 ceph-mon[75097]: pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:11 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aceeb7bd2b939f57815a6ca46848c08ddcab3d562c2f4f24414a087bc1bcf22/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aceeb7bd2b939f57815a6ca46848c08ddcab3d562c2f4f24414a087bc1bcf22/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3.
Jan 23 18:46:11 compute-0 podman[155596]: 2026-01-23 18:46:11.504454667 +0000 UTC m=+0.409733350 container init 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: + sudo -E kolla_set_configs
Jan 23 18:46:11 compute-0 podman[155596]: 2026-01-23 18:46:11.531024042 +0000 UTC m=+0.436302705 container start 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Validating config file
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Copying service configuration files
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Writing out command to execute
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: ++ cat /run_command
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: + CMD=neutron-ovn-metadata-agent
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: + ARGS=
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: + sudo kolla_copy_cacerts
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: + [[ ! -n '' ]]
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: + . kolla_extend_start
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: Running command: 'neutron-ovn-metadata-agent'
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: + umask 0022
Jan 23 18:46:11 compute-0 ovn_metadata_agent[155611]: + exec neutron-ovn-metadata-agent
Jan 23 18:46:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.522 155616 INFO neutron.common.config [-] Logging enabled!
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.522 155616 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.522 155616 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.523 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.523 155616 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.523 155616 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.523 155616 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.523 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.523 155616 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.523 155616 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.523 155616 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.524 155616 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.524 155616 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.524 155616 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.524 155616 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.524 155616 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.524 155616 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.524 155616 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.524 155616 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.525 155616 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.525 155616 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.525 155616 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.525 155616 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.525 155616 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.525 155616 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.525 155616 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.525 155616 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.525 155616 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.526 155616 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.526 155616 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.526 155616 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.526 155616 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.526 155616 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.526 155616 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.526 155616 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.526 155616 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.527 155616 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.527 155616 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.527 155616 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.527 155616 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.527 155616 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.527 155616 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.528 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.528 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.528 155616 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.528 155616 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.528 155616 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.528 155616 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.528 155616 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.528 155616 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.528 155616 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.528 155616 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.529 155616 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.529 155616 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.529 155616 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.529 155616 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.529 155616 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.529 155616 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.529 155616 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.529 155616 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.529 155616 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.529 155616 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.530 155616 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.530 155616 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.530 155616 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.530 155616 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.530 155616 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.530 155616 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.530 155616 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.530 155616 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.531 155616 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.531 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.531 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.531 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.531 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.531 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.531 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.531 155616 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.532 155616 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.532 155616 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.532 155616 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.532 155616 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.532 155616 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.532 155616 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.532 155616 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.533 155616 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.533 155616 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.533 155616 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.533 155616 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.533 155616 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.533 155616 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.533 155616 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.533 155616 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.533 155616 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.533 155616 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.534 155616 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.534 155616 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.534 155616 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.534 155616 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.534 155616 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.534 155616 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.534 155616 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.534 155616 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.534 155616 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.534 155616 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.535 155616 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.535 155616 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.535 155616 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.535 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.535 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.535 155616 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.535 155616 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.536 155616 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.536 155616 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.536 155616 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.536 155616 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.536 155616 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.536 155616 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.536 155616 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.536 155616 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.537 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.537 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.537 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.537 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.537 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.537 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.537 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.537 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.537 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.537 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.538 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.538 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.538 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.538 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.538 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.538 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.538 155616 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.538 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.539 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.539 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.539 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.539 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.539 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.539 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.539 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.539 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.539 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.540 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.540 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.540 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.540 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.540 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.540 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.540 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.540 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.540 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.541 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.541 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.541 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.541 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.541 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.541 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.541 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.541 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.542 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.542 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.542 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.542 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.542 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.542 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.542 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.542 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.542 155616 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.543 155616 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.543 155616 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.543 155616 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.543 155616 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.543 155616 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.543 155616 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.543 155616 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.543 155616 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.543 155616 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.543 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.544 155616 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.544 155616 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.544 155616 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.544 155616 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.544 155616 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.544 155616 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.544 155616 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.544 155616 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.545 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.545 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.545 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.545 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.545 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.545 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.545 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.545 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.545 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.546 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.546 155616 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.546 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.546 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.546 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.546 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.546 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.546 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.546 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.547 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.547 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.547 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.547 155616 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.547 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.547 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.547 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.547 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.548 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.548 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.548 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.548 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.548 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.548 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.548 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.548 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.549 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.549 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.549 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.549 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.549 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.549 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.550 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.550 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.550 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.550 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.550 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.550 155616 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.550 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.550 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.551 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.551 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.551 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.551 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.551 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.551 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.551 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.551 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.552 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.552 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.552 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.552 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.552 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.552 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.552 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.553 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.553 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.553 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.553 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.553 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.553 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.553 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.554 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.554 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.554 155616 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.554 155616 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.554 155616 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.554 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.554 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.555 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.555 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.555 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.555 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.555 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.555 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.555 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.555 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.555 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.556 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.556 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.556 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.556 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.556 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.556 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.556 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.556 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.557 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.557 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.557 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.557 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.557 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.557 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.557 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.558 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.558 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.558 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.558 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.558 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.558 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.558 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.558 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.559 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.559 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.559 155616 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.559 155616 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.567 155616 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.567 155616 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.567 155616 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.568 155616 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.568 155616 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.579 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d273cd83-f2f8-4b91-b18f-957051ff2a6c (UUID: d273cd83-f2f8-4b91-b18f-957051ff2a6c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.613 155616 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.614 155616 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.614 155616 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.614 155616 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.617 155616 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.623 155616 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.632 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd273cd83-f2f8-4b91-b18f-957051ff2a6c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f3195bf2b80>], external_ids={}, name=d273cd83-f2f8-4b91-b18f-957051ff2a6c, nb_cfg_timestamp=1769193914103, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.633 155616 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f3195b74c10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.634 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.635 155616 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.635 155616 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.635 155616 INFO oslo_service.service [-] Starting 1 workers
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.641 155616 DEBUG oslo_service.service [-] Started child 155639 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.644 155639 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-433897'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.646 155616 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpipioruvf/privsep.sock']
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.669 155639 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.669 155639 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.670 155639 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.673 155639 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.679 155639 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 23 18:46:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:13.686 155639 INFO eventlet.wsgi.server [-] (155639) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 23 18:46:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:14 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 23 18:46:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:14.335 155616 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 23 18:46:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:14.336 155616 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpipioruvf/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 23 18:46:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:14.222 155644 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 18:46:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:14.226 155644 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 18:46:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:14.228 155644 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 23 18:46:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:14.229 155644 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155644
Jan 23 18:46:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:14.338 155644 DEBUG oslo.privsep.daemon [-] privsep: reply[e6248654-4681-42be-8f42-3d798c2987cd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 18:46:14 compute-0 edpm-start-podman-container[155596]: ovn_metadata_agent
Jan 23 18:46:14 compute-0 edpm-start-podman-container[155595]: Creating additional drop-in dependency for "ovn_metadata_agent" (064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3)
Jan 23 18:46:14 compute-0 systemd[1]: Reloading.
Jan 23 18:46:14 compute-0 podman[155618]: 2026-01-23 18:46:14.674646418 +0000 UTC m=+3.133650378 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 18:46:14 compute-0 systemd-sysv-generator[155705]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:46:14 compute-0 systemd-rc-local-generator[155702]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:46:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:14.863 155644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:46:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:14.863 155644 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:46:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:14.864 155644 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:46:15 compute-0 ceph-mon[75097]: pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.387 155644 DEBUG oslo.privsep.daemon [-] privsep: reply[739b4c50-019d-4000-b026-1d4d8a4f5e26]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.390 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, column=external_ids, values=({'neutron:ovn-metadata-id': 'fc04c3c1-e594-5327-9258-a86dec946259'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.405 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.416 155616 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.416 155616 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.417 155616 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.417 155616 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.417 155616 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.417 155616 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.418 155616 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.418 155616 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.419 155616 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.419 155616 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.420 155616 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.420 155616 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.420 155616 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.421 155616 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.421 155616 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.422 155616 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.422 155616 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.422 155616 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.423 155616 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.423 155616 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.423 155616 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.424 155616 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.424 155616 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.424 155616 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.425 155616 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.425 155616 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.426 155616 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.426 155616 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.427 155616 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.427 155616 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.427 155616 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.428 155616 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.428 155616 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.428 155616 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.429 155616 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.429 155616 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.429 155616 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.430 155616 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.430 155616 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.431 155616 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.431 155616 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.431 155616 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.432 155616 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.432 155616 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.432 155616 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.433 155616 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.433 155616 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.433 155616 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.434 155616 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.434 155616 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.434 155616 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.435 155616 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.435 155616 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.435 155616 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.436 155616 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.436 155616 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.436 155616 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.437 155616 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.437 155616 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.437 155616 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.438 155616 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.438 155616 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.438 155616 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.439 155616 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.439 155616 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.439 155616 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.440 155616 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.440 155616 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.440 155616 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.441 155616 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.441 155616 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.441 155616 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.442 155616 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.442 155616 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.442 155616 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.443 155616 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.443 155616 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.444 155616 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.444 155616 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.444 155616 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.445 155616 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.445 155616 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.445 155616 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.446 155616 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.446 155616 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.446 155616 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.447 155616 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.447 155616 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.447 155616 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.448 155616 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.448 155616 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.448 155616 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.449 155616 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.449 155616 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.449 155616 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.450 155616 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.450 155616 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.450 155616 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.451 155616 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.451 155616 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.451 155616 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.452 155616 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.452 155616 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.453 155616 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.453 155616 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.453 155616 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.454 155616 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.454 155616 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.454 155616 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.455 155616 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.455 155616 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.456 155616 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.456 155616 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.456 155616 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.456 155616 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.457 155616 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.457 155616 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.457 155616 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.457 155616 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.457 155616 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.458 155616 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.458 155616 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.458 155616 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.458 155616 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.459 155616 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.459 155616 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.459 155616 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.459 155616 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.460 155616 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.460 155616 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.460 155616 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.460 155616 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.460 155616 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.461 155616 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.461 155616 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.461 155616 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.461 155616 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.462 155616 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.462 155616 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.462 155616 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.462 155616 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.462 155616 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.463 155616 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.463 155616 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.463 155616 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.463 155616 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.463 155616 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.464 155616 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.464 155616 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.464 155616 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.464 155616 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.464 155616 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.465 155616 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.465 155616 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.465 155616 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.465 155616 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.466 155616 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.466 155616 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.466 155616 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.466 155616 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.466 155616 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.466 155616 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.467 155616 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.467 155616 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.467 155616 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.467 155616 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.467 155616 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.468 155616 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.468 155616 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.468 155616 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.468 155616 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.469 155616 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.469 155616 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.469 155616 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.469 155616 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.470 155616 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.470 155616 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.470 155616 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.470 155616 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.471 155616 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.471 155616 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.471 155616 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.471 155616 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.472 155616 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.472 155616 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.472 155616 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.472 155616 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.472 155616 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.473 155616 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.473 155616 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.473 155616 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.473 155616 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.474 155616 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.474 155616 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.474 155616 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.474 155616 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.474 155616 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.475 155616 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.475 155616 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.475 155616 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.475 155616 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.475 155616 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.476 155616 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.476 155616 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.476 155616 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.476 155616 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.476 155616 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.477 155616 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.477 155616 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.477 155616 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.477 155616 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.477 155616 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.478 155616 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.478 155616 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.478 155616 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.478 155616 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.478 155616 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.479 155616 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.479 155616 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.479 155616 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.479 155616 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.479 155616 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.480 155616 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.480 155616 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.480 155616 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.480 155616 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.480 155616 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.481 155616 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.481 155616 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.481 155616 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.481 155616 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.481 155616 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.482 155616 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.482 155616 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.482 155616 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.482 155616 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.482 155616 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.483 155616 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.483 155616 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.483 155616 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.483 155616 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.484 155616 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.484 155616 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.484 155616 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.484 155616 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.485 155616 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.485 155616 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.485 155616 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.485 155616 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.485 155616 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.486 155616 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.486 155616 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.486 155616 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.486 155616 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.487 155616 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.487 155616 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.487 155616 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.487 155616 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.487 155616 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.488 155616 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.488 155616 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.488 155616 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.488 155616 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.488 155616 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.489 155616 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.489 155616 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.489 155616 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.489 155616 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.490 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.490 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.490 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.490 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.490 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.491 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.491 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.491 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.491 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.491 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.492 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.492 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.492 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.492 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.492 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.492 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.492 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.492 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.492 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.493 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.493 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.493 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.493 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.493 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.493 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.493 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.493 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.493 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.494 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.494 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.494 155616 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.494 155616 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.494 155616 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.494 155616 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.494 155616 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:46:15 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:46:15.494 155616 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 18:46:15 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 23 18:46:15 compute-0 sudo[155553]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:16 compute-0 python3.9[155860]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 18:46:16 compute-0 ceph-mon[75097]: pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:16 compute-0 ceph-mon[75097]: pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:17 compute-0 sudo[156010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmihsircgjuwzmakkssxekwdpunrlldl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193976.995326-484-15377489419171/AnsiballZ_stat.py'
Jan 23 18:46:17 compute-0 sudo[156010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:17 compute-0 python3.9[156012]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:46:17 compute-0 sudo[156010]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:17 compute-0 sudo[156135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvnycmxsdfpcjuxvdzqpcuylcdojpfwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193976.995326-484-15377489419171/AnsiballZ_copy.py'
Jan 23 18:46:17 compute-0 sudo[156135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:18 compute-0 python3.9[156137]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769193976.995326-484-15377489419171/.source.yaml _original_basename=.6pdzk0xk follow=False checksum=78710bbf476a8d2ee07c7b4163e63e8dc9f22758 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:18 compute-0 sudo[156135]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:18 compute-0 ceph-mon[75097]: pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:18 compute-0 sshd-session[146587]: Connection closed by 192.168.122.30 port 58654
Jan 23 18:46:18 compute-0 sshd-session[146584]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:46:18 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Jan 23 18:46:18 compute-0 systemd[1]: session-47.scope: Consumed 56.328s CPU time.
Jan 23 18:46:18 compute-0 systemd-logind[792]: Session 47 logged out. Waiting for processes to exit.
Jan 23 18:46:18 compute-0 systemd-logind[792]: Removed session 47.
Jan 23 18:46:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:20 compute-0 ceph-mon[75097]: pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:23 compute-0 ceph-mon[75097]: pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:23 compute-0 sshd-session[156162]: Accepted publickey for zuul from 192.168.122.30 port 34690 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:46:23 compute-0 systemd-logind[792]: New session 48 of user zuul.
Jan 23 18:46:23 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 23 18:46:23 compute-0 sshd-session[156162]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:46:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:24 compute-0 python3.9[156315]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:46:25 compute-0 ceph-mon[75097]: pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:25 compute-0 sudo[156469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdfcoizsuylsfzrtsyegsfktnivzmajb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193985.4557507-29-109862539754688/AnsiballZ_command.py'
Jan 23 18:46:25 compute-0 sudo[156469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:26 compute-0 python3.9[156471]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:46:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:26 compute-0 sudo[156469]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:27 compute-0 sudo[156634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgmliizmzwmjtpqiwzmqongypckvsalq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193986.4452012-40-281322315484339/AnsiballZ_systemd_service.py'
Jan 23 18:46:27 compute-0 sudo[156634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:27 compute-0 ceph-mon[75097]: pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:27 compute-0 python3.9[156636]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 18:46:27 compute-0 systemd[1]: Reloading.
Jan 23 18:46:27 compute-0 systemd-sysv-generator[156667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:46:27 compute-0 systemd-rc-local-generator[156664]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:46:27 compute-0 sudo[156634]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:28 compute-0 python3.9[156821]: ansible-ansible.builtin.service_facts Invoked
Jan 23 18:46:28 compute-0 network[156838]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 18:46:28 compute-0 network[156839]: 'network-scripts' will be removed from distribution in near future.
Jan 23 18:46:28 compute-0 network[156840]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 18:46:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:46:28
Jan 23 18:46:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:46:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:46:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups', 'images', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 23 18:46:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:46:29 compute-0 ceph-mon[75097]: pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:46:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:46:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:31 compute-0 ceph-mon[75097]: pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:31 compute-0 sudo[157100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqhlmjdifdynezvidigwniajtgfbeoez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193991.3541613-59-56201311185070/AnsiballZ_systemd_service.py'
Jan 23 18:46:31 compute-0 sudo[157100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:31 compute-0 python3.9[157102]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:46:32 compute-0 sudo[157100]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:32 compute-0 sudo[157104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:46:32 compute-0 sudo[157104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:46:32 compute-0 sudo[157104]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:32 compute-0 sudo[157153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:46:32 compute-0 sudo[157153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:46:32 compute-0 sudo[157317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fywgniwjhcvayvlokfigdqdsghlytelz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193992.1398585-59-29812957501763/AnsiballZ_systemd_service.py'
Jan 23 18:46:32 compute-0 sudo[157317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:32 compute-0 sudo[157153]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 18:46:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 23 18:46:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:46:32 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:46:32 compute-0 python3.9[157319]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:46:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:46:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:46:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:46:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:46:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:46:32 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:46:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:46:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:46:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:46:32 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:46:32 compute-0 sudo[157317]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:32 compute-0 sudo[157338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:46:32 compute-0 sudo[157338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:46:32 compute-0 sudo[157338]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:32 compute-0 sudo[157365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:46:32 compute-0 sudo[157365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:46:33 compute-0 podman[157522]: 2026-01-23 18:46:33.134195831 +0000 UTC m=+0.049669948 container create 13d996d21f17b7c91e3775d6dceb2dd2e2c3a0e78171291037cd537a431451ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:46:33 compute-0 sudo[157563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epbaqretmhllchurqdgnvtvfnwrbdiok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193992.903934-59-86224532101978/AnsiballZ_systemd_service.py'
Jan 23 18:46:33 compute-0 sudo[157563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:33 compute-0 systemd[1]: Started libpod-conmon-13d996d21f17b7c91e3775d6dceb2dd2e2c3a0e78171291037cd537a431451ba.scope.
Jan 23 18:46:33 compute-0 podman[157522]: 2026-01-23 18:46:33.109226004 +0000 UTC m=+0.024700041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:46:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:46:33 compute-0 podman[157522]: 2026-01-23 18:46:33.244455816 +0000 UTC m=+0.159929883 container init 13d996d21f17b7c91e3775d6dceb2dd2e2c3a0e78171291037cd537a431451ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 18:46:33 compute-0 podman[157522]: 2026-01-23 18:46:33.256137059 +0000 UTC m=+0.171611076 container start 13d996d21f17b7c91e3775d6dceb2dd2e2c3a0e78171291037cd537a431451ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:46:33 compute-0 podman[157522]: 2026-01-23 18:46:33.260330565 +0000 UTC m=+0.175804582 container attach 13d996d21f17b7c91e3775d6dceb2dd2e2c3a0e78171291037cd537a431451ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 18:46:33 compute-0 pedantic_faraday[157568]: 167 167
Jan 23 18:46:33 compute-0 systemd[1]: libpod-13d996d21f17b7c91e3775d6dceb2dd2e2c3a0e78171291037cd537a431451ba.scope: Deactivated successfully.
Jan 23 18:46:33 compute-0 podman[157522]: 2026-01-23 18:46:33.275850794 +0000 UTC m=+0.191324841 container died 13d996d21f17b7c91e3775d6dceb2dd2e2c3a0e78171291037cd537a431451ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 23 18:46:33 compute-0 ceph-mon[75097]: pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 23 18:46:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:46:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:46:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:46:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:46:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:46:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:46:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6de7c0970ec174cc2a5f9856401621228455b2617a2b1d3d70b6681e22f868b6-merged.mount: Deactivated successfully.
Jan 23 18:46:33 compute-0 python3.9[157567]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:46:33 compute-0 podman[157522]: 2026-01-23 18:46:33.484290803 +0000 UTC m=+0.399764820 container remove 13d996d21f17b7c91e3775d6dceb2dd2e2c3a0e78171291037cd537a431451ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 18:46:33 compute-0 sudo[157563]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:33 compute-0 systemd[1]: libpod-conmon-13d996d21f17b7c91e3775d6dceb2dd2e2c3a0e78171291037cd537a431451ba.scope: Deactivated successfully.
Jan 23 18:46:33 compute-0 podman[157616]: 2026-01-23 18:46:33.664985055 +0000 UTC m=+0.046020285 container create b81e45677e9da615264552e62ea42e72942b3068beba22f0fd2aa0f84c2abcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:46:33 compute-0 systemd[1]: Started libpod-conmon-b81e45677e9da615264552e62ea42e72942b3068beba22f0fd2aa0f84c2abcad.scope.
Jan 23 18:46:33 compute-0 podman[157616]: 2026-01-23 18:46:33.645683291 +0000 UTC m=+0.026718551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:46:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb6c4e4e20caae5ca6e0d97d1cd31a31909295a52dea993e003f3029ebf70f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb6c4e4e20caae5ca6e0d97d1cd31a31909295a52dea993e003f3029ebf70f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb6c4e4e20caae5ca6e0d97d1cd31a31909295a52dea993e003f3029ebf70f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb6c4e4e20caae5ca6e0d97d1cd31a31909295a52dea993e003f3029ebf70f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb6c4e4e20caae5ca6e0d97d1cd31a31909295a52dea993e003f3029ebf70f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:33 compute-0 podman[157616]: 2026-01-23 18:46:33.764356688 +0000 UTC m=+0.145391978 container init b81e45677e9da615264552e62ea42e72942b3068beba22f0fd2aa0f84c2abcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:46:33 compute-0 podman[157616]: 2026-01-23 18:46:33.773829875 +0000 UTC m=+0.154865115 container start b81e45677e9da615264552e62ea42e72942b3068beba22f0fd2aa0f84c2abcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 18:46:33 compute-0 podman[157616]: 2026-01-23 18:46:33.777975539 +0000 UTC m=+0.159010799 container attach b81e45677e9da615264552e62ea42e72942b3068beba22f0fd2aa0f84c2abcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lovelace, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:46:33 compute-0 sudo[157762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlpewtcysrbwvhqaqwtlneyppxgtycwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193993.6543057-59-206689075908473/AnsiballZ_systemd_service.py'
Jan 23 18:46:33 compute-0 sudo[157762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 23 18:46:34 compute-0 beautiful_lovelace[157684]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:46:34 compute-0 beautiful_lovelace[157684]: --> All data devices are unavailable
Jan 23 18:46:34 compute-0 systemd[1]: libpod-b81e45677e9da615264552e62ea42e72942b3068beba22f0fd2aa0f84c2abcad.scope: Deactivated successfully.
Jan 23 18:46:34 compute-0 podman[157616]: 2026-01-23 18:46:34.28031677 +0000 UTC m=+0.661352020 container died b81e45677e9da615264552e62ea42e72942b3068beba22f0fd2aa0f84c2abcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 18:46:34 compute-0 python3.9[157766]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdb6c4e4e20caae5ca6e0d97d1cd31a31909295a52dea993e003f3029ebf70f8-merged.mount: Deactivated successfully.
Jan 23 18:46:34 compute-0 sudo[157762]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:34 compute-0 podman[157616]: 2026-01-23 18:46:34.32973727 +0000 UTC m=+0.710772510 container remove b81e45677e9da615264552e62ea42e72942b3068beba22f0fd2aa0f84c2abcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lovelace, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:46:34 compute-0 systemd[1]: libpod-conmon-b81e45677e9da615264552e62ea42e72942b3068beba22f0fd2aa0f84c2abcad.scope: Deactivated successfully.
Jan 23 18:46:34 compute-0 sudo[157365]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:34 compute-0 sudo[157817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:46:34 compute-0 sudo[157817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:46:34 compute-0 sudo[157817]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:34 compute-0 sudo[157865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:46:34 compute-0 sudo[157865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:46:34 compute-0 ceph-mon[75097]: pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 23 18:46:34 compute-0 sudo[157999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-claqukrrgstwersjscfqtzydjnpvardd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193994.439659-59-130800643265429/AnsiballZ_systemd_service.py'
Jan 23 18:46:34 compute-0 sudo[157999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:34 compute-0 podman[158007]: 2026-01-23 18:46:34.770526637 +0000 UTC m=+0.046010415 container create e6c332662af44ded57daca5c72248556337b38cc5b05ddbf8bb5f294994b9af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldberg, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:46:34 compute-0 systemd[1]: Started libpod-conmon-e6c332662af44ded57daca5c72248556337b38cc5b05ddbf8bb5f294994b9af1.scope.
Jan 23 18:46:34 compute-0 podman[158007]: 2026-01-23 18:46:34.750914255 +0000 UTC m=+0.026398053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:46:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:46:34 compute-0 podman[158007]: 2026-01-23 18:46:34.862808092 +0000 UTC m=+0.138291870 container init e6c332662af44ded57daca5c72248556337b38cc5b05ddbf8bb5f294994b9af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:46:34 compute-0 podman[158007]: 2026-01-23 18:46:34.868920826 +0000 UTC m=+0.144404604 container start e6c332662af44ded57daca5c72248556337b38cc5b05ddbf8bb5f294994b9af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:46:34 compute-0 eager_goldberg[158023]: 167 167
Jan 23 18:46:34 compute-0 systemd[1]: libpod-e6c332662af44ded57daca5c72248556337b38cc5b05ddbf8bb5f294994b9af1.scope: Deactivated successfully.
Jan 23 18:46:34 compute-0 podman[158007]: 2026-01-23 18:46:34.874944687 +0000 UTC m=+0.150428485 container attach e6c332662af44ded57daca5c72248556337b38cc5b05ddbf8bb5f294994b9af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:46:34 compute-0 podman[158007]: 2026-01-23 18:46:34.875393368 +0000 UTC m=+0.150877166 container died e6c332662af44ded57daca5c72248556337b38cc5b05ddbf8bb5f294994b9af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b55d12ad7b59d0d1243c5de748f81b38eeb39314b441c87d64a8116e31799546-merged.mount: Deactivated successfully.
Jan 23 18:46:34 compute-0 podman[158007]: 2026-01-23 18:46:34.923288099 +0000 UTC m=+0.198771867 container remove e6c332662af44ded57daca5c72248556337b38cc5b05ddbf8bb5f294994b9af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 18:46:34 compute-0 systemd[1]: libpod-conmon-e6c332662af44ded57daca5c72248556337b38cc5b05ddbf8bb5f294994b9af1.scope: Deactivated successfully.
Jan 23 18:46:35 compute-0 python3.9[158006]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:46:35 compute-0 sudo[157999]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:35 compute-0 podman[158048]: 2026-01-23 18:46:35.121913502 +0000 UTC m=+0.040439996 container create 95c767ca24eea5ac45b5953a01b994a865a580bfb431e42d3991f7a4d5e4be4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:46:35 compute-0 systemd[1]: Started libpod-conmon-95c767ca24eea5ac45b5953a01b994a865a580bfb431e42d3991f7a4d5e4be4a.scope.
Jan 23 18:46:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17544232a61d71bb5624e3622f6bc8f119b7d9fdac010b0a2894ea967531b95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17544232a61d71bb5624e3622f6bc8f119b7d9fdac010b0a2894ea967531b95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17544232a61d71bb5624e3622f6bc8f119b7d9fdac010b0a2894ea967531b95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17544232a61d71bb5624e3622f6bc8f119b7d9fdac010b0a2894ea967531b95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:35 compute-0 podman[158048]: 2026-01-23 18:46:35.104486915 +0000 UTC m=+0.023013419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:46:35 compute-0 podman[158048]: 2026-01-23 18:46:35.201035737 +0000 UTC m=+0.119562261 container init 95c767ca24eea5ac45b5953a01b994a865a580bfb431e42d3991f7a4d5e4be4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 23 18:46:35 compute-0 podman[158048]: 2026-01-23 18:46:35.215172132 +0000 UTC m=+0.133698636 container start 95c767ca24eea5ac45b5953a01b994a865a580bfb431e42d3991f7a4d5e4be4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:46:35 compute-0 podman[158048]: 2026-01-23 18:46:35.219288255 +0000 UTC m=+0.137814749 container attach 95c767ca24eea5ac45b5953a01b994a865a580bfb431e42d3991f7a4d5e4be4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]: {
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:     "0": [
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:         {
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "devices": [
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "/dev/loop3"
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             ],
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_name": "ceph_lv0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_size": "21470642176",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "name": "ceph_lv0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "tags": {
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.cluster_name": "ceph",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.crush_device_class": "",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.encrypted": "0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.objectstore": "bluestore",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.osd_id": "0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.type": "block",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.vdo": "0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.with_tpm": "0"
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             },
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "type": "block",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "vg_name": "ceph_vg0"
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:         }
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:     ],
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:     "1": [
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:         {
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "devices": [
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "/dev/loop4"
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             ],
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_name": "ceph_lv1",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_size": "21470642176",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "name": "ceph_lv1",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "tags": {
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.cluster_name": "ceph",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.crush_device_class": "",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.encrypted": "0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.objectstore": "bluestore",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.osd_id": "1",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.type": "block",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.vdo": "0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.with_tpm": "0"
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             },
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "type": "block",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "vg_name": "ceph_vg1"
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:         }
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:     ],
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:     "2": [
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:         {
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "devices": [
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "/dev/loop5"
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             ],
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_name": "ceph_lv2",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_size": "21470642176",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "name": "ceph_lv2",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "tags": {
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.cluster_name": "ceph",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.crush_device_class": "",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.encrypted": "0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.objectstore": "bluestore",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.osd_id": "2",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.type": "block",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.vdo": "0",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:                 "ceph.with_tpm": "0"
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             },
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "type": "block",
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:             "vg_name": "ceph_vg2"
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:         }
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]:     ]
Jan 23 18:46:35 compute-0 mystifying_chandrasekhar[158089]: }
Jan 23 18:46:35 compute-0 sudo[158223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubbfbspyunsmssokfjvnwwbobkaunmet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193995.2238941-59-166494128113356/AnsiballZ_systemd_service.py'
Jan 23 18:46:35 compute-0 sudo[158223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:35 compute-0 systemd[1]: libpod-95c767ca24eea5ac45b5953a01b994a865a580bfb431e42d3991f7a4d5e4be4a.scope: Deactivated successfully.
Jan 23 18:46:35 compute-0 conmon[158089]: conmon 95c767ca24eea5ac45b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95c767ca24eea5ac45b5953a01b994a865a580bfb431e42d3991f7a4d5e4be4a.scope/container/memory.events
Jan 23 18:46:35 compute-0 podman[158048]: 2026-01-23 18:46:35.528087001 +0000 UTC m=+0.446613495 container died 95c767ca24eea5ac45b5953a01b994a865a580bfb431e42d3991f7a4d5e4be4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 23 18:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d17544232a61d71bb5624e3622f6bc8f119b7d9fdac010b0a2894ea967531b95-merged.mount: Deactivated successfully.
Jan 23 18:46:35 compute-0 podman[158048]: 2026-01-23 18:46:35.599694667 +0000 UTC m=+0.518221161 container remove 95c767ca24eea5ac45b5953a01b994a865a580bfb431e42d3991f7a4d5e4be4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:46:35 compute-0 systemd[1]: libpod-conmon-95c767ca24eea5ac45b5953a01b994a865a580bfb431e42d3991f7a4d5e4be4a.scope: Deactivated successfully.
Jan 23 18:46:35 compute-0 sudo[157865]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:35 compute-0 sudo[158237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:46:35 compute-0 sudo[158237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:46:35 compute-0 sudo[158237]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:35 compute-0 sudo[158262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:46:35 compute-0 sudo[158262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:46:35 compute-0 python3.9[158225]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:46:35 compute-0 sudo[158223]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:36 compute-0 podman[158330]: 2026-01-23 18:46:36.011324342 +0000 UTC m=+0.039971353 container create e7212a92194c358a9c6aaa837bf40a7a1d5bdf3977c54be7406b1b74896e30e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wozniak, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 18:46:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:36 compute-0 systemd[1]: Started libpod-conmon-e7212a92194c358a9c6aaa837bf40a7a1d5bdf3977c54be7406b1b74896e30e0.scope.
Jan 23 18:46:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:46:36 compute-0 podman[158330]: 2026-01-23 18:46:35.995053425 +0000 UTC m=+0.023700456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:46:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:46:36 compute-0 podman[158330]: 2026-01-23 18:46:36.235824505 +0000 UTC m=+0.264471596 container init e7212a92194c358a9c6aaa837bf40a7a1d5bdf3977c54be7406b1b74896e30e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wozniak, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:46:36 compute-0 podman[158330]: 2026-01-23 18:46:36.247085637 +0000 UTC m=+0.275732648 container start e7212a92194c358a9c6aaa837bf40a7a1d5bdf3977c54be7406b1b74896e30e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 18:46:36 compute-0 charming_wozniak[158393]: 167 167
Jan 23 18:46:36 compute-0 systemd[1]: libpod-e7212a92194c358a9c6aaa837bf40a7a1d5bdf3977c54be7406b1b74896e30e0.scope: Deactivated successfully.
Jan 23 18:46:36 compute-0 sudo[158470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayzfzkmilwrxjfegaonpnjjgmhqwwfjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193995.9889295-59-100351703768518/AnsiballZ_systemd_service.py'
Jan 23 18:46:36 compute-0 sudo[158470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:36 compute-0 podman[158330]: 2026-01-23 18:46:36.27071573 +0000 UTC m=+0.299362751 container attach e7212a92194c358a9c6aaa837bf40a7a1d5bdf3977c54be7406b1b74896e30e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 23 18:46:36 compute-0 podman[158330]: 2026-01-23 18:46:36.271078849 +0000 UTC m=+0.299725870 container died e7212a92194c358a9c6aaa837bf40a7a1d5bdf3977c54be7406b1b74896e30e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wozniak, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 18:46:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-08a18996b5e82e0f62f3cc5c18718ae9db3d2fdd37eb260ee95e453f3deb5c71-merged.mount: Deactivated successfully.
Jan 23 18:46:36 compute-0 podman[158330]: 2026-01-23 18:46:36.400697251 +0000 UTC m=+0.429344292 container remove e7212a92194c358a9c6aaa837bf40a7a1d5bdf3977c54be7406b1b74896e30e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:46:36 compute-0 systemd[1]: libpod-conmon-e7212a92194c358a9c6aaa837bf40a7a1d5bdf3977c54be7406b1b74896e30e0.scope: Deactivated successfully.
Jan 23 18:46:36 compute-0 python3.9[158474]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:46:36 compute-0 podman[158495]: 2026-01-23 18:46:36.602633416 +0000 UTC m=+0.044015676 container create 17618bc9b043e50953b9fd5cac64558e974f3a3a379f022d3bac3c63d8e86607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_shamir, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 23 18:46:36 compute-0 sudo[158470]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:36 compute-0 systemd[1]: Started libpod-conmon-17618bc9b043e50953b9fd5cac64558e974f3a3a379f022d3bac3c63d8e86607.scope.
Jan 23 18:46:36 compute-0 podman[158495]: 2026-01-23 18:46:36.582807209 +0000 UTC m=+0.024189469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:46:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08751cf8a6dfe64dba8013d1c216da65ec7c38be1297cb52842714da2853a9c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08751cf8a6dfe64dba8013d1c216da65ec7c38be1297cb52842714da2853a9c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08751cf8a6dfe64dba8013d1c216da65ec7c38be1297cb52842714da2853a9c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08751cf8a6dfe64dba8013d1c216da65ec7c38be1297cb52842714da2853a9c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:46:36 compute-0 podman[158495]: 2026-01-23 18:46:36.71002404 +0000 UTC m=+0.151406340 container init 17618bc9b043e50953b9fd5cac64558e974f3a3a379f022d3bac3c63d8e86607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 23 18:46:36 compute-0 podman[158495]: 2026-01-23 18:46:36.720503283 +0000 UTC m=+0.161885503 container start 17618bc9b043e50953b9fd5cac64558e974f3a3a379f022d3bac3c63d8e86607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_shamir, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:46:36 compute-0 podman[158495]: 2026-01-23 18:46:36.72439425 +0000 UTC m=+0.165776470 container attach 17618bc9b043e50953b9fd5cac64558e974f3a3a379f022d3bac3c63d8e86607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_shamir, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 18:46:36 compute-0 podman[158510]: 2026-01-23 18:46:36.748957296 +0000 UTC m=+0.113452367 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 23 18:46:37 compute-0 ceph-mon[75097]: pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:46:37 compute-0 sudo[158760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gndvsyxlsuhzplfwgnuztbcgddojhcqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193996.9587948-111-174209184415923/AnsiballZ_file.py'
Jan 23 18:46:37 compute-0 sudo[158760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:37 compute-0 lvm[158767]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:46:37 compute-0 lvm[158768]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:46:37 compute-0 lvm[158767]: VG ceph_vg0 finished
Jan 23 18:46:37 compute-0 lvm[158768]: VG ceph_vg1 finished
Jan 23 18:46:37 compute-0 lvm[158770]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:46:37 compute-0 lvm[158770]: VG ceph_vg2 finished
Jan 23 18:46:37 compute-0 epic_shamir[158523]: {}
Jan 23 18:46:37 compute-0 systemd[1]: libpod-17618bc9b043e50953b9fd5cac64558e974f3a3a379f022d3bac3c63d8e86607.scope: Deactivated successfully.
Jan 23 18:46:37 compute-0 podman[158495]: 2026-01-23 18:46:37.566827852 +0000 UTC m=+1.008210112 container died 17618bc9b043e50953b9fd5cac64558e974f3a3a379f022d3bac3c63d8e86607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 23 18:46:37 compute-0 systemd[1]: libpod-17618bc9b043e50953b9fd5cac64558e974f3a3a379f022d3bac3c63d8e86607.scope: Consumed 1.386s CPU time.
Jan 23 18:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-08751cf8a6dfe64dba8013d1c216da65ec7c38be1297cb52842714da2853a9c7-merged.mount: Deactivated successfully.
Jan 23 18:46:37 compute-0 python3.9[158763]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:37 compute-0 sudo[158760]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:37 compute-0 podman[158495]: 2026-01-23 18:46:37.969020532 +0000 UTC m=+1.410402762 container remove 17618bc9b043e50953b9fd5cac64558e974f3a3a379f022d3bac3c63d8e86607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_shamir, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 18:46:37 compute-0 systemd[1]: libpod-conmon-17618bc9b043e50953b9fd5cac64558e974f3a3a379f022d3bac3c63d8e86607.scope: Deactivated successfully.
Jan 23 18:46:38 compute-0 sudo[158262]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:38 compute-0 sudo[158933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehheoytnptssnokkuihljlygjqenowwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193997.7482467-111-86014594468172/AnsiballZ_file.py'
Jan 23 18:46:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:46:38 compute-0 sudo[158933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:46:38 compute-0 python3.9[158935]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:38 compute-0 sudo[158933]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:38 compute-0 sudo[159086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uutlrpxpmegjfwwnpvwdiqqjrudppign ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193998.4929092-111-139634123181392/AnsiballZ_file.py'
Jan 23 18:46:38 compute-0 sudo[159086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:46:38 compute-0 python3.9[159088]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:38 compute-0 sudo[159086]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:46:39 compute-0 sudo[159239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvaheodzcivxyhevkwwwcmrxkrsaustq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193999.080959-111-116450125003556/AnsiballZ_file.py'
Jan 23 18:46:39 compute-0 sudo[159239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:39 compute-0 python3.9[159241]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:39 compute-0 sudo[159239]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:39 compute-0 ceph-mon[75097]: pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:46:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:46:39 compute-0 sudo[159269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:46:39 compute-0 sudo[159269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:46:39 compute-0 sudo[159269]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:39 compute-0 sudo[159416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjcytwushrnnzgbozsijrpmhccfxdsar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769193999.6734812-111-239275202022484/AnsiballZ_file.py'
Jan 23 18:46:39 compute-0 sudo[159416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:46:40 compute-0 python3.9[159418]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:40 compute-0 sudo[159416]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:46:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:46:40 compute-0 sudo[159568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfqffgdkibaucniayctwzagisflxquqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194000.275446-111-89009347522953/AnsiballZ_file.py'
Jan 23 18:46:40 compute-0 sudo[159568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:46:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:46:40 compute-0 ceph-mon[75097]: pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:46:40 compute-0 python3.9[159570]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:40 compute-0 sudo[159568]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:41 compute-0 sudo[159720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usrsgdubwdkvdgrzzbywfzcuzrkshjbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194000.9026177-111-66148149677350/AnsiballZ_file.py'
Jan 23 18:46:41 compute-0 sudo[159720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:41 compute-0 python3.9[159722]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:41 compute-0 sudo[159720]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:42 compute-0 sudo[159872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvfnwkmdfmpxvizqmgbiqhvhsutexrbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194001.6450915-161-251401477451555/AnsiballZ_file.py'
Jan 23 18:46:42 compute-0 sudo[159872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:46:42 compute-0 python3.9[159874]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:42 compute-0 sudo[159872]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:42 compute-0 ceph-mon[75097]: pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:46:42 compute-0 sudo[160024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cawderndunkpvsfwexmbrygkvldbmauy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194002.395005-161-242340652453213/AnsiballZ_file.py'
Jan 23 18:46:42 compute-0 sudo[160024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:42 compute-0 python3.9[160026]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:42 compute-0 sudo[160024]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:43 compute-0 sudo[160176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhpvssxmacgslsutmrgiwtblgjdzjccp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194002.9676566-161-171344410662673/AnsiballZ_file.py'
Jan 23 18:46:43 compute-0 sudo[160176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:43 compute-0 python3.9[160178]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:43 compute-0 sudo[160176]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:43 compute-0 sudo[160328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzhzimtnuundfsxxvflanllfopkvpgwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194003.648869-161-28190596954575/AnsiballZ_file.py'
Jan 23 18:46:43 compute-0 sudo[160328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:46:44 compute-0 python3.9[160330]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:44 compute-0 sudo[160328]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:44 compute-0 sudo[160480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxnbgfrvsdppxijhcbalpxvvmfshgbal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194004.3276184-161-51582185283195/AnsiballZ_file.py'
Jan 23 18:46:44 compute-0 sudo[160480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:44 compute-0 python3.9[160482]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:44 compute-0 sudo[160480]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:45 compute-0 ceph-mon[75097]: pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:46:45 compute-0 sudo[160632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtpjpwkpegvsmmdmrjsbqacxmvfwubsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194004.9432619-161-18593988028863/AnsiballZ_file.py'
Jan 23 18:46:45 compute-0 sudo[160632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:45 compute-0 python3.9[160634]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:45 compute-0 sudo[160632]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:45 compute-0 sudo[160800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqssgywvnlzzdtdvwsrduvrbhifnsbyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194005.5950348-161-207410231127945/AnsiballZ_file.py'
Jan 23 18:46:45 compute-0 sudo[160800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:45 compute-0 podman[160758]: 2026-01-23 18:46:45.897864095 +0000 UTC m=+0.058107879 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 18:46:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:46 compute-0 python3.9[160805]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:46:46 compute-0 sudo[160800]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 23 18:46:46 compute-0 ceph-mon[75097]: pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 23 18:46:46 compute-0 sudo[160955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtycinmfxonjiyuhilenvxnfgxsisopj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194006.393825-212-39520123784887/AnsiballZ_command.py'
Jan 23 18:46:46 compute-0 sudo[160955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:46 compute-0 python3.9[160957]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:46:46 compute-0 sudo[160955]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:47 compute-0 python3.9[161109]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 18:46:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:48 compute-0 sudo[161259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raibqzuyxrlcgkjqyfhteofvfhpsaybu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194008.009745-230-166744310060901/AnsiballZ_systemd_service.py'
Jan 23 18:46:48 compute-0 sudo[161259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:48 compute-0 python3.9[161261]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 18:46:48 compute-0 systemd[1]: Reloading.
Jan 23 18:46:48 compute-0 systemd-rc-local-generator[161288]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:46:48 compute-0 systemd-sysv-generator[161291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:46:49 compute-0 sudo[161259]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:49 compute-0 ceph-mon[75097]: pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:49 compute-0 sudo[161447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efbpwmtxlwwyroyagoddmtlnjlelitri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194009.3572145-238-174128255221052/AnsiballZ_command.py'
Jan 23 18:46:49 compute-0 sudo[161447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:49 compute-0 python3.9[161449]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:46:49 compute-0 sudo[161447]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:50 compute-0 sudo[161600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yetswmnycqzpxsheqkdkitqrhxyplljn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194009.9780664-238-65716568896915/AnsiballZ_command.py'
Jan 23 18:46:50 compute-0 sudo[161600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:50 compute-0 python3.9[161602]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:46:50 compute-0 sudo[161600]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:50 compute-0 sudo[161753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvnvfigajzwcbdmhbgwxvmnluvstifjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194010.5990465-238-149709449428395/AnsiballZ_command.py'
Jan 23 18:46:50 compute-0 sudo[161753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:51 compute-0 python3.9[161755]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:46:51 compute-0 sudo[161753]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:51 compute-0 ceph-mon[75097]: pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:51 compute-0 sshd-session[161420]: error: kex_exchange_identification: read: Connection reset by peer
Jan 23 18:46:51 compute-0 sshd-session[161420]: Connection reset by 80.212.64.207 port 33448
Jan 23 18:46:51 compute-0 sudo[161906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qswfbjomqhhfirknudtmuspezikfzubm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194011.181618-238-215810973966807/AnsiballZ_command.py'
Jan 23 18:46:51 compute-0 sudo[161906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:51 compute-0 python3.9[161908]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:46:51 compute-0 sudo[161906]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:52 compute-0 sudo[162060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggfzdxleofwbqvoihfxbhqqmmvqakyga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194011.7849371-238-90905747504045/AnsiballZ_command.py'
Jan 23 18:46:52 compute-0 sudo[162060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:52 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 18:46:52 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 18:46:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:52 compute-0 python3.9[162062]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:46:52 compute-0 sudo[162060]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:52 compute-0 sudo[162214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sehcegykluotmdzxfqppyxftsvrqgsjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194012.3871474-238-177924085880191/AnsiballZ_command.py'
Jan 23 18:46:52 compute-0 sudo[162214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:52 compute-0 python3.9[162216]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:46:52 compute-0 sudo[162214]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:53 compute-0 sudo[162367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhsacaiiteastnljoqaymexlkuwysplv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194013.03097-238-94732990038112/AnsiballZ_command.py'
Jan 23 18:46:53 compute-0 sudo[162367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:53 compute-0 python3.9[162369]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:46:53 compute-0 ceph-mon[75097]: pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:53 compute-0 sudo[162367]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:54 compute-0 sudo[162520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msilpthguqfsbshbxzdxqwqbxcfdiinr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194013.9077532-292-168847146738065/AnsiballZ_getent.py'
Jan 23 18:46:54 compute-0 sudo[162520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:54 compute-0 python3.9[162522]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 23 18:46:54 compute-0 ceph-mon[75097]: pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:54 compute-0 sudo[162520]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:55 compute-0 sudo[162673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkuujnjjywdalpxnigskfalygrowxzjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194014.7733414-300-51822865385799/AnsiballZ_group.py'
Jan 23 18:46:55 compute-0 sudo[162673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:55 compute-0 python3.9[162675]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 18:46:55 compute-0 groupadd[162676]: group added to /etc/group: name=libvirt, GID=42473
Jan 23 18:46:55 compute-0 groupadd[162676]: group added to /etc/gshadow: name=libvirt
Jan 23 18:46:55 compute-0 groupadd[162676]: new group: name=libvirt, GID=42473
Jan 23 18:46:55 compute-0 sudo[162673]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:46:56 compute-0 sudo[162831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrcnltmebcrntugnvjhcfnuterpgropi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194015.626385-308-132084948596824/AnsiballZ_user.py'
Jan 23 18:46:56 compute-0 sudo[162831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:56 compute-0 python3.9[162833]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 18:46:56 compute-0 useradd[162835]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 23 18:46:56 compute-0 sudo[162831]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:57 compute-0 sudo[162992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyzuueoajwmuacltfkqpjljtbtpnrvvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194016.7419362-319-219395925304490/AnsiballZ_setup.py'
Jan 23 18:46:57 compute-0 sudo[162992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:57 compute-0 ceph-mon[75097]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:57 compute-0 python3.9[162994]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:46:57 compute-0 sudo[162992]: pam_unix(sudo:session): session closed for user root
Jan 23 18:46:58 compute-0 sudo[163076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yleooyqorpkfbgmooovfczdhklvjktqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194016.7419362-319-219395925304490/AnsiballZ_dnf.py'
Jan 23 18:46:58 compute-0 sudo[163076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:46:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:46:58 compute-0 python3.9[163078]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:46:59 compute-0 ceph-mon[75097]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:47:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:47:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:47:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:47:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:47:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:47:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:00 compute-0 ceph-mon[75097]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:03 compute-0 ceph-mon[75097]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:05 compute-0 ceph-mon[75097]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:06 compute-0 ceph-mon[75097]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:07 compute-0 podman[163090]: 2026-01-23 18:47:07.09068922 +0000 UTC m=+0.144656160 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 23 18:47:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:08 compute-0 ceph-mon[75097]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:11 compute-0 ceph-mon[75097]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:12 compute-0 ceph-mon[75097]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:47:13.562 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:47:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:47:13.563 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:47:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:47:13.563 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:47:13 compute-0 sshd-session[161909]: Invalid user a from 80.212.64.207 port 36142
Jan 23 18:47:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:15 compute-0 ceph-mon[75097]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:16 compute-0 ceph-mon[75097]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:16 compute-0 podman[163117]: 2026-01-23 18:47:16.995342288 +0000 UTC m=+0.064993262 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 18:47:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:18 compute-0 ceph-mon[75097]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:19 compute-0 sshd-session[161909]: Connection closed by invalid user a 80.212.64.207 port 36142 [preauth]
Jan 23 18:47:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:20 compute-0 ceph-mon[75097]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:23 compute-0 ceph-mon[75097]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:25 compute-0 ceph-mon[75097]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:26 compute-0 ceph-mon[75097]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:47:28
Jan 23 18:47:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:47:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:47:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'volumes', '.mgr']
Jan 23 18:47:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:30 compute-0 ceph-mon[75097]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:47:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:47:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:31 compute-0 ceph-mon[75097]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:32 compute-0 ceph-mon[75097]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:34 compute-0 ceph-mon[75097]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:37 compute-0 ceph-mon[75097]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:38 compute-0 podman[163315]: 2026-01-23 18:47:38.054936183 +0000 UTC m=+0.134857904 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 18:47:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:39 compute-0 ceph-mon[75097]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:39 compute-0 sudo[163342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:47:39 compute-0 sudo[163342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:39 compute-0 sudo[163342]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:39 compute-0 sudo[163367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:47:39 compute-0 sudo[163367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:47:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:47:40 compute-0 podman[163437]: 2026-01-23 18:47:40.376028167 +0000 UTC m=+0.079768722 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 18:47:40 compute-0 podman[163437]: 2026-01-23 18:47:40.484794554 +0000 UTC m=+0.188535129 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 18:47:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:41 compute-0 sudo[163367]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:47:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:47:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:47:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:47:41 compute-0 sudo[163623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:47:41 compute-0 sudo[163623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:41 compute-0 sudo[163623]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:41 compute-0 ceph-mon[75097]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:47:41 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:47:41 compute-0 sudo[163648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:47:41 compute-0 sudo[163648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:41 compute-0 sudo[163648]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:47:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:47:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:47:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:47:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:47:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:47:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:47:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:47:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:47:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:47:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:47:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:47:42 compute-0 sudo[163705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:47:42 compute-0 sudo[163705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:42 compute-0 sudo[163705]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:42 compute-0 sudo[163730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:47:42 compute-0 sudo[163730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:47:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:47:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:47:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:47:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:47:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:47:42 compute-0 podman[163767]: 2026-01-23 18:47:42.471933842 +0000 UTC m=+0.072304445 container create 8b758ca7e5375a1e448596c97959453a5574b8bf03a88c0206180ceab191ccda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:47:42 compute-0 systemd[1]: Started libpod-conmon-8b758ca7e5375a1e448596c97959453a5574b8bf03a88c0206180ceab191ccda.scope.
Jan 23 18:47:42 compute-0 podman[163767]: 2026-01-23 18:47:42.440842661 +0000 UTC m=+0.041213304 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:47:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:47:42 compute-0 podman[163767]: 2026-01-23 18:47:42.563310034 +0000 UTC m=+0.163680667 container init 8b758ca7e5375a1e448596c97959453a5574b8bf03a88c0206180ceab191ccda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bhabha, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 23 18:47:42 compute-0 podman[163767]: 2026-01-23 18:47:42.57235274 +0000 UTC m=+0.172723333 container start 8b758ca7e5375a1e448596c97959453a5574b8bf03a88c0206180ceab191ccda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bhabha, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 18:47:42 compute-0 systemd[1]: libpod-8b758ca7e5375a1e448596c97959453a5574b8bf03a88c0206180ceab191ccda.scope: Deactivated successfully.
Jan 23 18:47:42 compute-0 podman[163767]: 2026-01-23 18:47:42.5834882 +0000 UTC m=+0.183858833 container attach 8b758ca7e5375a1e448596c97959453a5574b8bf03a88c0206180ceab191ccda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 18:47:42 compute-0 cool_bhabha[163784]: 167 167
Jan 23 18:47:42 compute-0 podman[163767]: 2026-01-23 18:47:42.585745327 +0000 UTC m=+0.186115920 container died 8b758ca7e5375a1e448596c97959453a5574b8bf03a88c0206180ceab191ccda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bhabha, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:47:42 compute-0 conmon[163784]: conmon 8b758ca7e5375a1e4485 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b758ca7e5375a1e448596c97959453a5574b8bf03a88c0206180ceab191ccda.scope/container/memory.events
Jan 23 18:47:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0143a8d26333654d0680fef531b8467fd5bc13f397b37252329e443b91b1786b-merged.mount: Deactivated successfully.
Jan 23 18:47:42 compute-0 podman[163767]: 2026-01-23 18:47:42.649823824 +0000 UTC m=+0.250194457 container remove 8b758ca7e5375a1e448596c97959453a5574b8bf03a88c0206180ceab191ccda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:47:42 compute-0 systemd[1]: libpod-conmon-8b758ca7e5375a1e448596c97959453a5574b8bf03a88c0206180ceab191ccda.scope: Deactivated successfully.
Jan 23 18:47:42 compute-0 podman[163808]: 2026-01-23 18:47:42.886383498 +0000 UTC m=+0.067024312 container create caa264d5dcac4a3a48c1f180c72dd9334ded16dc7218acb354ca9cc670e5a0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_germain, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:47:42 compute-0 podman[163808]: 2026-01-23 18:47:42.85380227 +0000 UTC m=+0.034443084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:47:42 compute-0 systemd[1]: Started libpod-conmon-caa264d5dcac4a3a48c1f180c72dd9334ded16dc7218acb354ca9cc670e5a0d3.scope.
Jan 23 18:47:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf839929bc5378a00c89c6a491f8cce94993c5bc05d82f63a24052236c95eca7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf839929bc5378a00c89c6a491f8cce94993c5bc05d82f63a24052236c95eca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf839929bc5378a00c89c6a491f8cce94993c5bc05d82f63a24052236c95eca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf839929bc5378a00c89c6a491f8cce94993c5bc05d82f63a24052236c95eca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf839929bc5378a00c89c6a491f8cce94993c5bc05d82f63a24052236c95eca7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:43 compute-0 podman[163808]: 2026-01-23 18:47:43.006760138 +0000 UTC m=+0.187400922 container init caa264d5dcac4a3a48c1f180c72dd9334ded16dc7218acb354ca9cc670e5a0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:47:43 compute-0 podman[163808]: 2026-01-23 18:47:43.022862391 +0000 UTC m=+0.203503195 container start caa264d5dcac4a3a48c1f180c72dd9334ded16dc7218acb354ca9cc670e5a0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_germain, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:47:43 compute-0 podman[163808]: 2026-01-23 18:47:43.028766809 +0000 UTC m=+0.209407593 container attach caa264d5dcac4a3a48c1f180c72dd9334ded16dc7218acb354ca9cc670e5a0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_germain, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:47:43 compute-0 ceph-mon[75097]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:43 compute-0 sad_germain[163824]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:47:43 compute-0 sad_germain[163824]: --> All data devices are unavailable
Jan 23 18:47:43 compute-0 systemd[1]: libpod-caa264d5dcac4a3a48c1f180c72dd9334ded16dc7218acb354ca9cc670e5a0d3.scope: Deactivated successfully.
Jan 23 18:47:43 compute-0 podman[163808]: 2026-01-23 18:47:43.566102588 +0000 UTC m=+0.746743352 container died caa264d5dcac4a3a48c1f180c72dd9334ded16dc7218acb354ca9cc670e5a0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_germain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf839929bc5378a00c89c6a491f8cce94993c5bc05d82f63a24052236c95eca7-merged.mount: Deactivated successfully.
Jan 23 18:47:43 compute-0 podman[163808]: 2026-01-23 18:47:43.612542053 +0000 UTC m=+0.793182817 container remove caa264d5dcac4a3a48c1f180c72dd9334ded16dc7218acb354ca9cc670e5a0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 23 18:47:43 compute-0 systemd[1]: libpod-conmon-caa264d5dcac4a3a48c1f180c72dd9334ded16dc7218acb354ca9cc670e5a0d3.scope: Deactivated successfully.
Jan 23 18:47:43 compute-0 sudo[163730]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:43 compute-0 sudo[163856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:47:43 compute-0 sudo[163856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:43 compute-0 sudo[163856]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:43 compute-0 sudo[163881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:47:43 compute-0 sudo[163881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:44 compute-0 podman[163917]: 2026-01-23 18:47:44.16133727 +0000 UTC m=+0.071665689 container create 7cb31fdacaef6893dda6850d65274822fceafbb2419325b432ceebd10fbb79bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:47:44 compute-0 podman[163917]: 2026-01-23 18:47:44.114885745 +0000 UTC m=+0.025214174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:47:44 compute-0 systemd[1]: Started libpod-conmon-7cb31fdacaef6893dda6850d65274822fceafbb2419325b432ceebd10fbb79bc.scope.
Jan 23 18:47:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:47:44 compute-0 podman[163917]: 2026-01-23 18:47:44.281084864 +0000 UTC m=+0.191413243 container init 7cb31fdacaef6893dda6850d65274822fceafbb2419325b432ceebd10fbb79bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:47:44 compute-0 podman[163917]: 2026-01-23 18:47:44.2889269 +0000 UTC m=+0.199255279 container start 7cb31fdacaef6893dda6850d65274822fceafbb2419325b432ceebd10fbb79bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:47:44 compute-0 podman[163917]: 2026-01-23 18:47:44.293644769 +0000 UTC m=+0.203973148 container attach 7cb31fdacaef6893dda6850d65274822fceafbb2419325b432ceebd10fbb79bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 18:47:44 compute-0 funny_swanson[163933]: 167 167
Jan 23 18:47:44 compute-0 systemd[1]: libpod-7cb31fdacaef6893dda6850d65274822fceafbb2419325b432ceebd10fbb79bc.scope: Deactivated successfully.
Jan 23 18:47:44 compute-0 podman[163917]: 2026-01-23 18:47:44.297474084 +0000 UTC m=+0.207802503 container died 7cb31fdacaef6893dda6850d65274822fceafbb2419325b432ceebd10fbb79bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Jan 23 18:47:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8446df1e247c2f3e1e8561774eb3828222f2bc699b87a13207072655f4cb52c2-merged.mount: Deactivated successfully.
Jan 23 18:47:44 compute-0 podman[163917]: 2026-01-23 18:47:44.35389077 +0000 UTC m=+0.264219159 container remove 7cb31fdacaef6893dda6850d65274822fceafbb2419325b432ceebd10fbb79bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 23 18:47:44 compute-0 systemd[1]: libpod-conmon-7cb31fdacaef6893dda6850d65274822fceafbb2419325b432ceebd10fbb79bc.scope: Deactivated successfully.
Jan 23 18:47:44 compute-0 podman[163956]: 2026-01-23 18:47:44.551946558 +0000 UTC m=+0.046628270 container create f276b5740c0a6b3251f9427b33c5008f2834cb574af720fd8ef0284e341b812d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_sammet, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 18:47:44 compute-0 systemd[1]: Started libpod-conmon-f276b5740c0a6b3251f9427b33c5008f2834cb574af720fd8ef0284e341b812d.scope.
Jan 23 18:47:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7095de0f2e0fb98d2aa925156b090871ba2b5b77289854f6a1830552cc9807ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:44 compute-0 podman[163956]: 2026-01-23 18:47:44.53130633 +0000 UTC m=+0.025988032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7095de0f2e0fb98d2aa925156b090871ba2b5b77289854f6a1830552cc9807ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7095de0f2e0fb98d2aa925156b090871ba2b5b77289854f6a1830552cc9807ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7095de0f2e0fb98d2aa925156b090871ba2b5b77289854f6a1830552cc9807ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:44 compute-0 podman[163956]: 2026-01-23 18:47:44.642696515 +0000 UTC m=+0.137378247 container init f276b5740c0a6b3251f9427b33c5008f2834cb574af720fd8ef0284e341b812d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:47:44 compute-0 podman[163956]: 2026-01-23 18:47:44.65087374 +0000 UTC m=+0.145555462 container start f276b5740c0a6b3251f9427b33c5008f2834cb574af720fd8ef0284e341b812d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 23 18:47:44 compute-0 podman[163956]: 2026-01-23 18:47:44.655313921 +0000 UTC m=+0.149995603 container attach f276b5740c0a6b3251f9427b33c5008f2834cb574af720fd8ef0284e341b812d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_sammet, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]: {
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:     "0": [
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:         {
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "devices": [
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "/dev/loop3"
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             ],
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_name": "ceph_lv0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_size": "21470642176",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "name": "ceph_lv0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "tags": {
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.cluster_name": "ceph",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.crush_device_class": "",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.encrypted": "0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.objectstore": "bluestore",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.osd_id": "0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.type": "block",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.vdo": "0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.with_tpm": "0"
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             },
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "type": "block",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "vg_name": "ceph_vg0"
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:         }
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:     ],
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:     "1": [
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:         {
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "devices": [
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "/dev/loop4"
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             ],
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_name": "ceph_lv1",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_size": "21470642176",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "name": "ceph_lv1",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "tags": {
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.cluster_name": "ceph",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.crush_device_class": "",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.encrypted": "0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.objectstore": "bluestore",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.osd_id": "1",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.type": "block",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.vdo": "0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.with_tpm": "0"
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             },
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "type": "block",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "vg_name": "ceph_vg1"
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:         }
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:     ],
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:     "2": [
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:         {
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "devices": [
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "/dev/loop5"
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             ],
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_name": "ceph_lv2",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_size": "21470642176",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "name": "ceph_lv2",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "tags": {
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.cluster_name": "ceph",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.crush_device_class": "",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.encrypted": "0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.objectstore": "bluestore",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.osd_id": "2",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.type": "block",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.vdo": "0",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:                 "ceph.with_tpm": "0"
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             },
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "type": "block",
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:             "vg_name": "ceph_vg2"
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:         }
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]:     ]
Jan 23 18:47:44 compute-0 upbeat_sammet[163972]: }
Jan 23 18:47:44 compute-0 systemd[1]: libpod-f276b5740c0a6b3251f9427b33c5008f2834cb574af720fd8ef0284e341b812d.scope: Deactivated successfully.
Jan 23 18:47:44 compute-0 podman[163956]: 2026-01-23 18:47:44.963542603 +0000 UTC m=+0.458224285 container died f276b5740c0a6b3251f9427b33c5008f2834cb574af720fd8ef0284e341b812d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_sammet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:47:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7095de0f2e0fb98d2aa925156b090871ba2b5b77289854f6a1830552cc9807ab-merged.mount: Deactivated successfully.
Jan 23 18:47:45 compute-0 podman[163956]: 2026-01-23 18:47:45.014744348 +0000 UTC m=+0.509426030 container remove f276b5740c0a6b3251f9427b33c5008f2834cb574af720fd8ef0284e341b812d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_sammet, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:47:45 compute-0 systemd[1]: libpod-conmon-f276b5740c0a6b3251f9427b33c5008f2834cb574af720fd8ef0284e341b812d.scope: Deactivated successfully.
Jan 23 18:47:45 compute-0 sudo[163881]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:45 compute-0 sudo[163993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:47:45 compute-0 sudo[163993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:45 compute-0 sudo[163993]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:45 compute-0 sudo[164018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:47:45 compute-0 sudo[164018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:45 compute-0 ceph-mon[75097]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:45 compute-0 podman[164053]: 2026-01-23 18:47:45.501691022 +0000 UTC m=+0.049301868 container create cf4c34e7ea17e7ad51bbff098eb1c93cbc8fedfbc8196990261bd68991fe1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_yalow, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 18:47:45 compute-0 systemd[1]: Started libpod-conmon-cf4c34e7ea17e7ad51bbff098eb1c93cbc8fedfbc8196990261bd68991fe1018.scope.
Jan 23 18:47:45 compute-0 podman[164053]: 2026-01-23 18:47:45.480587012 +0000 UTC m=+0.028197848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:47:45 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:47:45 compute-0 podman[164053]: 2026-01-23 18:47:45.593413543 +0000 UTC m=+0.141024399 container init cf4c34e7ea17e7ad51bbff098eb1c93cbc8fedfbc8196990261bd68991fe1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:47:45 compute-0 podman[164053]: 2026-01-23 18:47:45.601697871 +0000 UTC m=+0.149308707 container start cf4c34e7ea17e7ad51bbff098eb1c93cbc8fedfbc8196990261bd68991fe1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:47:45 compute-0 podman[164053]: 2026-01-23 18:47:45.60563924 +0000 UTC m=+0.153250106 container attach cf4c34e7ea17e7ad51bbff098eb1c93cbc8fedfbc8196990261bd68991fe1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:47:45 compute-0 heuristic_yalow[164069]: 167 167
Jan 23 18:47:45 compute-0 systemd[1]: libpod-cf4c34e7ea17e7ad51bbff098eb1c93cbc8fedfbc8196990261bd68991fe1018.scope: Deactivated successfully.
Jan 23 18:47:45 compute-0 podman[164053]: 2026-01-23 18:47:45.609082666 +0000 UTC m=+0.156693482 container died cf4c34e7ea17e7ad51bbff098eb1c93cbc8fedfbc8196990261bd68991fe1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_yalow, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 18:47:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb537e8318e346cf8d93f7e0c92c2bc4cc8d832c0c218a2b0a3331cad59db857-merged.mount: Deactivated successfully.
Jan 23 18:47:45 compute-0 podman[164053]: 2026-01-23 18:47:45.655208333 +0000 UTC m=+0.202819159 container remove cf4c34e7ea17e7ad51bbff098eb1c93cbc8fedfbc8196990261bd68991fe1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:47:45 compute-0 systemd[1]: libpod-conmon-cf4c34e7ea17e7ad51bbff098eb1c93cbc8fedfbc8196990261bd68991fe1018.scope: Deactivated successfully.
Jan 23 18:47:45 compute-0 podman[164093]: 2026-01-23 18:47:45.897822889 +0000 UTC m=+0.075550757 container create a383ccdd7f214566ea423b5c4e0cbae95d113c23bd0182c25248738b29756501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 18:47:45 compute-0 systemd[1]: Started libpod-conmon-a383ccdd7f214566ea423b5c4e0cbae95d113c23bd0182c25248738b29756501.scope.
Jan 23 18:47:45 compute-0 podman[164093]: 2026-01-23 18:47:45.865299773 +0000 UTC m=+0.043027721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:47:45 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb78d1f620a63a70ea372c76f55d4de09912276feae709d9de20f8213db2ffa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb78d1f620a63a70ea372c76f55d4de09912276feae709d9de20f8213db2ffa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb78d1f620a63a70ea372c76f55d4de09912276feae709d9de20f8213db2ffa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb78d1f620a63a70ea372c76f55d4de09912276feae709d9de20f8213db2ffa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:47:45 compute-0 podman[164093]: 2026-01-23 18:47:45.983029716 +0000 UTC m=+0.160757594 container init a383ccdd7f214566ea423b5c4e0cbae95d113c23bd0182c25248738b29756501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_booth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:47:45 compute-0 podman[164093]: 2026-01-23 18:47:45.99192965 +0000 UTC m=+0.169657508 container start a383ccdd7f214566ea423b5c4e0cbae95d113c23bd0182c25248738b29756501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_booth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 23 18:47:46 compute-0 podman[164093]: 2026-01-23 18:47:46.000617488 +0000 UTC m=+0.178345356 container attach a383ccdd7f214566ea423b5c4e0cbae95d113c23bd0182c25248738b29756501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_booth, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:47:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:46 compute-0 lvm[164190]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:47:46 compute-0 lvm[164189]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:47:46 compute-0 lvm[164189]: VG ceph_vg0 finished
Jan 23 18:47:46 compute-0 lvm[164190]: VG ceph_vg1 finished
Jan 23 18:47:46 compute-0 lvm[164192]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:47:46 compute-0 lvm[164192]: VG ceph_vg2 finished
Jan 23 18:47:46 compute-0 great_booth[164109]: {}
Jan 23 18:47:46 compute-0 systemd[1]: libpod-a383ccdd7f214566ea423b5c4e0cbae95d113c23bd0182c25248738b29756501.scope: Deactivated successfully.
Jan 23 18:47:46 compute-0 systemd[1]: libpod-a383ccdd7f214566ea423b5c4e0cbae95d113c23bd0182c25248738b29756501.scope: Consumed 1.314s CPU time.
Jan 23 18:47:46 compute-0 podman[164093]: 2026-01-23 18:47:46.846109687 +0000 UTC m=+1.023837555 container died a383ccdd7f214566ea423b5c4e0cbae95d113c23bd0182c25248738b29756501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_booth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:47:47 compute-0 ceph-mon[75097]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cb78d1f620a63a70ea372c76f55d4de09912276feae709d9de20f8213db2ffa-merged.mount: Deactivated successfully.
Jan 23 18:47:47 compute-0 podman[164093]: 2026-01-23 18:47:47.575647117 +0000 UTC m=+1.753374975 container remove a383ccdd7f214566ea423b5c4e0cbae95d113c23bd0182c25248738b29756501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 18:47:47 compute-0 systemd[1]: libpod-conmon-a383ccdd7f214566ea423b5c4e0cbae95d113c23bd0182c25248738b29756501.scope: Deactivated successfully.
Jan 23 18:47:47 compute-0 sudo[164018]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:47 compute-0 podman[164211]: 2026-01-23 18:47:47.627855846 +0000 UTC m=+0.068568291 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:47:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:47:47 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:47:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:47:47 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:47:47 compute-0 sudo[164230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:47:47 compute-0 sudo[164230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:47:47 compute-0 sudo[164230]: pam_unix(sudo:session): session closed for user root
Jan 23 18:47:47 compute-0 kernel: SELinux:  Converting 2775 SID table entries...
Jan 23 18:47:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 18:47:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 18:47:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 18:47:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 18:47:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 18:47:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 18:47:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 18:47:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:47:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:47:49 compute-0 ceph-mon[75097]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:51 compute-0 ceph-mon[75097]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:53 compute-0 ceph-mon[75097]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:55 compute-0 ceph-mon[75097]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:47:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:57 compute-0 ceph-mon[75097]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:57 compute-0 kernel: SELinux:  Converting 2775 SID table entries...
Jan 23 18:47:57 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 18:47:57 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 18:47:57 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 18:47:57 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 18:47:57 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 18:47:57 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 18:47:57 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 18:47:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:47:59 compute-0 ceph-mon[75097]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:48:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:48:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:48:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:48:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:48:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:48:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:01 compute-0 ceph-mon[75097]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:02 compute-0 ceph-mon[75097]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:04 compute-0 ceph-mon[75097]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:07 compute-0 ceph-mon[75097]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:08 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 23 18:48:09 compute-0 podman[164266]: 2026-01-23 18:48:09.028484805 +0000 UTC m=+0.100128633 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 23 18:48:09 compute-0 ceph-mon[75097]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:11 compute-0 ceph-mon[75097]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:13 compute-0 ceph-mon[75097]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:48:13.564 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:48:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:48:13.565 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:48:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:48:13.565 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:48:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:14 compute-0 ceph-mon[75097]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:17 compute-0 ceph-mon[75097]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:17 compute-0 podman[167690]: 2026-01-23 18:48:17.985826937 +0000 UTC m=+0.065378902 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 23 18:48:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:19 compute-0 ceph-mon[75097]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:21 compute-0 ceph-mon[75097]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:23 compute-0 ceph-mon[75097]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:25 compute-0 ceph-mon[75097]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:27 compute-0 ceph-mon[75097]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:28 compute-0 ceph-mon[75097]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:48:28
Jan 23 18:48:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:48:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:48:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.meta', '.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.meta']
Jan 23 18:48:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:48:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:48:31 compute-0 ceph-mon[75097]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:33 compute-0 ceph-mon[75097]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:34 compute-0 ceph-mon[75097]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:36 compute-0 ceph-mon[75097]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:39 compute-0 ceph-mon[75097]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:40 compute-0 podman[180808]: 2026-01-23 18:48:40.058851171 +0000 UTC m=+0.142571821 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:48:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:48:41 compute-0 ceph-mon[75097]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:42 compute-0 ceph-mon[75097]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:46 compute-0 ceph-mon[75097]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:47 compute-0 sudo[181192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:48:47 compute-0 sudo[181192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:48:47 compute-0 sudo[181192]: pam_unix(sudo:session): session closed for user root
Jan 23 18:48:47 compute-0 sudo[181217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:48:47 compute-0 sudo[181217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:48:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:48 compute-0 sudo[181217]: pam_unix(sudo:session): session closed for user root
Jan 23 18:48:48 compute-0 podman[181276]: 2026-01-23 18:48:48.664552977 +0000 UTC m=+0.064822017 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 18:48:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:48:48 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:48:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:48:48 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:48:48 compute-0 ceph-mon[75097]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:48:48 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:48:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:48:48 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:48:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:48:48 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:48:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:48:48 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:48:48 compute-0 sudo[181295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:48:48 compute-0 sudo[181295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:48:48 compute-0 sudo[181295]: pam_unix(sudo:session): session closed for user root
Jan 23 18:48:48 compute-0 sudo[181320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:48:48 compute-0 sudo[181320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:48:49 compute-0 podman[181362]: 2026-01-23 18:48:49.154939288 +0000 UTC m=+0.046177897 container create 42176d748ff2bf3c4e57fbd56b1dea6f706cc747cd2c8986de3b0c84d09d2d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shaw, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:48:49 compute-0 podman[181362]: 2026-01-23 18:48:49.134601555 +0000 UTC m=+0.025840194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:48:49 compute-0 systemd[1]: Started libpod-conmon-42176d748ff2bf3c4e57fbd56b1dea6f706cc747cd2c8986de3b0c84d09d2d30.scope.
Jan 23 18:48:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:48:49 compute-0 podman[181362]: 2026-01-23 18:48:49.294357388 +0000 UTC m=+0.185596017 container init 42176d748ff2bf3c4e57fbd56b1dea6f706cc747cd2c8986de3b0c84d09d2d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shaw, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 23 18:48:49 compute-0 podman[181362]: 2026-01-23 18:48:49.302682018 +0000 UTC m=+0.193920627 container start 42176d748ff2bf3c4e57fbd56b1dea6f706cc747cd2c8986de3b0c84d09d2d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 18:48:49 compute-0 sleepy_shaw[181379]: 167 167
Jan 23 18:48:49 compute-0 systemd[1]: libpod-42176d748ff2bf3c4e57fbd56b1dea6f706cc747cd2c8986de3b0c84d09d2d30.scope: Deactivated successfully.
Jan 23 18:48:49 compute-0 podman[181362]: 2026-01-23 18:48:49.309659265 +0000 UTC m=+0.200897894 container attach 42176d748ff2bf3c4e57fbd56b1dea6f706cc747cd2c8986de3b0c84d09d2d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shaw, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:48:49 compute-0 podman[181362]: 2026-01-23 18:48:49.310066285 +0000 UTC m=+0.201304894 container died 42176d748ff2bf3c4e57fbd56b1dea6f706cc747cd2c8986de3b0c84d09d2d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shaw, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:48:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fd6f624ea200cfe10b08691a4c37a3effe5fa67b0bff36dcdf77c2fccc394db-merged.mount: Deactivated successfully.
Jan 23 18:48:49 compute-0 podman[181362]: 2026-01-23 18:48:49.357169094 +0000 UTC m=+0.248407703 container remove 42176d748ff2bf3c4e57fbd56b1dea6f706cc747cd2c8986de3b0c84d09d2d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shaw, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:48:49 compute-0 systemd[1]: libpod-conmon-42176d748ff2bf3c4e57fbd56b1dea6f706cc747cd2c8986de3b0c84d09d2d30.scope: Deactivated successfully.
Jan 23 18:48:49 compute-0 podman[181404]: 2026-01-23 18:48:49.523302109 +0000 UTC m=+0.046756652 container create bb8a71d1b6a089c1dca83f302255b9be718777aa530c173a8a87f90c0d27fbda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 18:48:49 compute-0 systemd[1]: Started libpod-conmon-bb8a71d1b6a089c1dca83f302255b9be718777aa530c173a8a87f90c0d27fbda.scope.
Jan 23 18:48:49 compute-0 podman[181404]: 2026-01-23 18:48:49.503306603 +0000 UTC m=+0.026761166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:48:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a06fbf5a1797b70b7eeb218ff5fe82641447544caf6b2c8a9ea0426033cd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a06fbf5a1797b70b7eeb218ff5fe82641447544caf6b2c8a9ea0426033cd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a06fbf5a1797b70b7eeb218ff5fe82641447544caf6b2c8a9ea0426033cd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a06fbf5a1797b70b7eeb218ff5fe82641447544caf6b2c8a9ea0426033cd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a06fbf5a1797b70b7eeb218ff5fe82641447544caf6b2c8a9ea0426033cd8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:49 compute-0 podman[181404]: 2026-01-23 18:48:49.665372365 +0000 UTC m=+0.188826908 container init bb8a71d1b6a089c1dca83f302255b9be718777aa530c173a8a87f90c0d27fbda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:48:49 compute-0 podman[181404]: 2026-01-23 18:48:49.671406547 +0000 UTC m=+0.194861090 container start bb8a71d1b6a089c1dca83f302255b9be718777aa530c173a8a87f90c0d27fbda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 18:48:49 compute-0 podman[181404]: 2026-01-23 18:48:49.677249875 +0000 UTC m=+0.200704418 container attach bb8a71d1b6a089c1dca83f302255b9be718777aa530c173a8a87f90c0d27fbda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:48:49 compute-0 ceph-mon[75097]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:48:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:48:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:48:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:48:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:48:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:48:50 compute-0 stupefied_cartwright[181425]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:48:50 compute-0 stupefied_cartwright[181425]: --> All data devices are unavailable
Jan 23 18:48:50 compute-0 systemd[1]: libpod-bb8a71d1b6a089c1dca83f302255b9be718777aa530c173a8a87f90c0d27fbda.scope: Deactivated successfully.
Jan 23 18:48:50 compute-0 podman[181404]: 2026-01-23 18:48:50.170518909 +0000 UTC m=+0.693973462 container died bb8a71d1b6a089c1dca83f302255b9be718777aa530c173a8a87f90c0d27fbda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cartwright, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:48:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c8a06fbf5a1797b70b7eeb218ff5fe82641447544caf6b2c8a9ea0426033cd8-merged.mount: Deactivated successfully.
Jan 23 18:48:50 compute-0 podman[181404]: 2026-01-23 18:48:50.228943114 +0000 UTC m=+0.752397657 container remove bb8a71d1b6a089c1dca83f302255b9be718777aa530c173a8a87f90c0d27fbda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cartwright, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 18:48:50 compute-0 systemd[1]: libpod-conmon-bb8a71d1b6a089c1dca83f302255b9be718777aa530c173a8a87f90c0d27fbda.scope: Deactivated successfully.
Jan 23 18:48:50 compute-0 sudo[181320]: pam_unix(sudo:session): session closed for user root
Jan 23 18:48:50 compute-0 sudo[181457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:48:50 compute-0 sudo[181457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:48:50 compute-0 sudo[181457]: pam_unix(sudo:session): session closed for user root
Jan 23 18:48:50 compute-0 sudo[181482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:48:50 compute-0 sudo[181482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:48:50 compute-0 podman[181519]: 2026-01-23 18:48:50.659654968 +0000 UTC m=+0.047877830 container create 7217075259be794d9e5f30cebab61626fed7e78aafe9a40ef8be8afcb3ee4ebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 18:48:50 compute-0 systemd[1]: Started libpod-conmon-7217075259be794d9e5f30cebab61626fed7e78aafe9a40ef8be8afcb3ee4ebe.scope.
Jan 23 18:48:50 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:48:50 compute-0 podman[181519]: 2026-01-23 18:48:50.72706615 +0000 UTC m=+0.115288912 container init 7217075259be794d9e5f30cebab61626fed7e78aafe9a40ef8be8afcb3ee4ebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 23 18:48:50 compute-0 ceph-mon[75097]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:50 compute-0 podman[181519]: 2026-01-23 18:48:50.733291247 +0000 UTC m=+0.121513989 container start 7217075259be794d9e5f30cebab61626fed7e78aafe9a40ef8be8afcb3ee4ebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tu, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:48:50 compute-0 podman[181519]: 2026-01-23 18:48:50.640840503 +0000 UTC m=+0.029063275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:48:50 compute-0 admiring_tu[181536]: 167 167
Jan 23 18:48:50 compute-0 podman[181519]: 2026-01-23 18:48:50.737715569 +0000 UTC m=+0.125938331 container attach 7217075259be794d9e5f30cebab61626fed7e78aafe9a40ef8be8afcb3ee4ebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:48:50 compute-0 systemd[1]: libpod-7217075259be794d9e5f30cebab61626fed7e78aafe9a40ef8be8afcb3ee4ebe.scope: Deactivated successfully.
Jan 23 18:48:50 compute-0 podman[181519]: 2026-01-23 18:48:50.738721605 +0000 UTC m=+0.126944337 container died 7217075259be794d9e5f30cebab61626fed7e78aafe9a40ef8be8afcb3ee4ebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8df878ee6a8a2bc19f0d1af3b699a28b2346a64338bccbd57eb4911005c38fb7-merged.mount: Deactivated successfully.
Jan 23 18:48:50 compute-0 podman[181519]: 2026-01-23 18:48:50.779848703 +0000 UTC m=+0.168071475 container remove 7217075259be794d9e5f30cebab61626fed7e78aafe9a40ef8be8afcb3ee4ebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:48:50 compute-0 systemd[1]: libpod-conmon-7217075259be794d9e5f30cebab61626fed7e78aafe9a40ef8be8afcb3ee4ebe.scope: Deactivated successfully.
Jan 23 18:48:50 compute-0 podman[181561]: 2026-01-23 18:48:50.990031899 +0000 UTC m=+0.066740376 container create a8cbe1f657c684049b15514c979c8723fb77c15b044ebba4e675a17d36242505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 18:48:51 compute-0 systemd[1]: Started libpod-conmon-a8cbe1f657c684049b15514c979c8723fb77c15b044ebba4e675a17d36242505.scope.
Jan 23 18:48:51 compute-0 podman[181561]: 2026-01-23 18:48:50.954976654 +0000 UTC m=+0.031685201 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:48:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52ad26162191e8c1c3535b7f8458562c673ecc6d796fc6ac2a3d904f86d9f23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52ad26162191e8c1c3535b7f8458562c673ecc6d796fc6ac2a3d904f86d9f23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52ad26162191e8c1c3535b7f8458562c673ecc6d796fc6ac2a3d904f86d9f23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52ad26162191e8c1c3535b7f8458562c673ecc6d796fc6ac2a3d904f86d9f23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:51 compute-0 podman[181561]: 2026-01-23 18:48:51.071887926 +0000 UTC m=+0.148596403 container init a8cbe1f657c684049b15514c979c8723fb77c15b044ebba4e675a17d36242505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:48:51 compute-0 podman[181561]: 2026-01-23 18:48:51.078439611 +0000 UTC m=+0.155148068 container start a8cbe1f657c684049b15514c979c8723fb77c15b044ebba4e675a17d36242505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:48:51 compute-0 podman[181561]: 2026-01-23 18:48:51.082475903 +0000 UTC m=+0.159184380 container attach a8cbe1f657c684049b15514c979c8723fb77c15b044ebba4e675a17d36242505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 18:48:51 compute-0 youthful_tesla[181577]: {
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:     "0": [
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:         {
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "devices": [
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "/dev/loop3"
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             ],
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_name": "ceph_lv0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_size": "21470642176",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "name": "ceph_lv0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "tags": {
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.cluster_name": "ceph",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.crush_device_class": "",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.encrypted": "0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.objectstore": "bluestore",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.osd_id": "0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.type": "block",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.vdo": "0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.with_tpm": "0"
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             },
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "type": "block",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "vg_name": "ceph_vg0"
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:         }
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:     ],
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:     "1": [
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:         {
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "devices": [
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "/dev/loop4"
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             ],
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_name": "ceph_lv1",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_size": "21470642176",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "name": "ceph_lv1",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "tags": {
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.cluster_name": "ceph",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.crush_device_class": "",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.encrypted": "0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.objectstore": "bluestore",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.osd_id": "1",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.type": "block",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.vdo": "0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.with_tpm": "0"
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             },
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "type": "block",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "vg_name": "ceph_vg1"
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:         }
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:     ],
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:     "2": [
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:         {
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "devices": [
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "/dev/loop5"
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             ],
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_name": "ceph_lv2",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_size": "21470642176",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "name": "ceph_lv2",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "tags": {
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.cluster_name": "ceph",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.crush_device_class": "",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.encrypted": "0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.objectstore": "bluestore",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.osd_id": "2",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.type": "block",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.vdo": "0",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:                 "ceph.with_tpm": "0"
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             },
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "type": "block",
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:             "vg_name": "ceph_vg2"
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:         }
Jan 23 18:48:51 compute-0 youthful_tesla[181577]:     ]
Jan 23 18:48:51 compute-0 youthful_tesla[181577]: }
Jan 23 18:48:51 compute-0 systemd[1]: libpod-a8cbe1f657c684049b15514c979c8723fb77c15b044ebba4e675a17d36242505.scope: Deactivated successfully.
Jan 23 18:48:51 compute-0 podman[181561]: 2026-01-23 18:48:51.416163607 +0000 UTC m=+0.492872074 container died a8cbe1f657c684049b15514c979c8723fb77c15b044ebba4e675a17d36242505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:48:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c52ad26162191e8c1c3535b7f8458562c673ecc6d796fc6ac2a3d904f86d9f23-merged.mount: Deactivated successfully.
Jan 23 18:48:51 compute-0 podman[181561]: 2026-01-23 18:48:51.457660885 +0000 UTC m=+0.534369342 container remove a8cbe1f657c684049b15514c979c8723fb77c15b044ebba4e675a17d36242505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 18:48:51 compute-0 systemd[1]: libpod-conmon-a8cbe1f657c684049b15514c979c8723fb77c15b044ebba4e675a17d36242505.scope: Deactivated successfully.
Jan 23 18:48:51 compute-0 sudo[181482]: pam_unix(sudo:session): session closed for user root
Jan 23 18:48:51 compute-0 sudo[181596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:48:51 compute-0 sudo[181596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:48:51 compute-0 sudo[181596]: pam_unix(sudo:session): session closed for user root
Jan 23 18:48:51 compute-0 sudo[181621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:48:51 compute-0 sudo[181621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:48:51 compute-0 podman[181658]: 2026-01-23 18:48:51.971298453 +0000 UTC m=+0.069674060 container create 31034bda39b42f6a6f5a6e9b84ee681434b74138a18dd84b9ff9c42888d12b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:48:52 compute-0 systemd[1]: Started libpod-conmon-31034bda39b42f6a6f5a6e9b84ee681434b74138a18dd84b9ff9c42888d12b40.scope.
Jan 23 18:48:52 compute-0 podman[181658]: 2026-01-23 18:48:51.94385247 +0000 UTC m=+0.042228167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:48:52 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:48:52 compute-0 podman[181658]: 2026-01-23 18:48:52.05833867 +0000 UTC m=+0.156714337 container init 31034bda39b42f6a6f5a6e9b84ee681434b74138a18dd84b9ff9c42888d12b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:48:52 compute-0 podman[181658]: 2026-01-23 18:48:52.066676751 +0000 UTC m=+0.165052388 container start 31034bda39b42f6a6f5a6e9b84ee681434b74138a18dd84b9ff9c42888d12b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hugle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 18:48:52 compute-0 wonderful_hugle[181674]: 167 167
Jan 23 18:48:52 compute-0 systemd[1]: libpod-31034bda39b42f6a6f5a6e9b84ee681434b74138a18dd84b9ff9c42888d12b40.scope: Deactivated successfully.
Jan 23 18:48:52 compute-0 podman[181658]: 2026-01-23 18:48:52.07454246 +0000 UTC m=+0.172918097 container attach 31034bda39b42f6a6f5a6e9b84ee681434b74138a18dd84b9ff9c42888d12b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hugle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 18:48:52 compute-0 podman[181658]: 2026-01-23 18:48:52.07535717 +0000 UTC m=+0.173732797 container died 31034bda39b42f6a6f5a6e9b84ee681434b74138a18dd84b9ff9c42888d12b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hugle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 18:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-686c6810fa174b65d092b964d01a5efaa5850d3fe04bcffa789bb9ebae75f50f-merged.mount: Deactivated successfully.
Jan 23 18:48:52 compute-0 podman[181658]: 2026-01-23 18:48:52.134445702 +0000 UTC m=+0.232821339 container remove 31034bda39b42f6a6f5a6e9b84ee681434b74138a18dd84b9ff9c42888d12b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:48:52 compute-0 systemd[1]: libpod-conmon-31034bda39b42f6a6f5a6e9b84ee681434b74138a18dd84b9ff9c42888d12b40.scope: Deactivated successfully.
Jan 23 18:48:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:52 compute-0 podman[181698]: 2026-01-23 18:48:52.293800175 +0000 UTC m=+0.040161494 container create d24c3c26fc0d96786774f428e3c72ee0e665c3700377b10b3b588dbf5c1105da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:48:52 compute-0 systemd[1]: Started libpod-conmon-d24c3c26fc0d96786774f428e3c72ee0e665c3700377b10b3b588dbf5c1105da.scope.
Jan 23 18:48:52 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7dbf0b5624a25370b5fe61ed04c5e317d0eb7af4debfc7de83aee848580a082/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7dbf0b5624a25370b5fe61ed04c5e317d0eb7af4debfc7de83aee848580a082/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7dbf0b5624a25370b5fe61ed04c5e317d0eb7af4debfc7de83aee848580a082/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7dbf0b5624a25370b5fe61ed04c5e317d0eb7af4debfc7de83aee848580a082/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:48:52 compute-0 podman[181698]: 2026-01-23 18:48:52.279620357 +0000 UTC m=+0.025981686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:48:52 compute-0 podman[181698]: 2026-01-23 18:48:52.380912544 +0000 UTC m=+0.127273943 container init d24c3c26fc0d96786774f428e3c72ee0e665c3700377b10b3b588dbf5c1105da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_dhawan, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:48:52 compute-0 podman[181698]: 2026-01-23 18:48:52.388430535 +0000 UTC m=+0.134791864 container start d24c3c26fc0d96786774f428e3c72ee0e665c3700377b10b3b588dbf5c1105da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 23 18:48:52 compute-0 podman[181698]: 2026-01-23 18:48:52.405908655 +0000 UTC m=+0.152270014 container attach d24c3c26fc0d96786774f428e3c72ee0e665c3700377b10b3b588dbf5c1105da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_dhawan, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 23 18:48:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:53 compute-0 lvm[181791]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:48:53 compute-0 lvm[181791]: VG ceph_vg0 finished
Jan 23 18:48:53 compute-0 lvm[181793]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:48:53 compute-0 lvm[181793]: VG ceph_vg1 finished
Jan 23 18:48:53 compute-0 lvm[181795]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:48:53 compute-0 lvm[181795]: VG ceph_vg2 finished
Jan 23 18:48:53 compute-0 lvm[181796]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:48:53 compute-0 lvm[181796]: VG ceph_vg0 finished
Jan 23 18:48:53 compute-0 ceph-mon[75097]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:53 compute-0 suspicious_dhawan[181714]: {}
Jan 23 18:48:53 compute-0 systemd[1]: libpod-d24c3c26fc0d96786774f428e3c72ee0e665c3700377b10b3b588dbf5c1105da.scope: Deactivated successfully.
Jan 23 18:48:53 compute-0 systemd[1]: libpod-d24c3c26fc0d96786774f428e3c72ee0e665c3700377b10b3b588dbf5c1105da.scope: Consumed 1.388s CPU time.
Jan 23 18:48:53 compute-0 podman[181698]: 2026-01-23 18:48:53.278470706 +0000 UTC m=+1.024832015 container died d24c3c26fc0d96786774f428e3c72ee0e665c3700377b10b3b588dbf5c1105da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7dbf0b5624a25370b5fe61ed04c5e317d0eb7af4debfc7de83aee848580a082-merged.mount: Deactivated successfully.
Jan 23 18:48:53 compute-0 podman[181698]: 2026-01-23 18:48:53.330468248 +0000 UTC m=+1.076829557 container remove d24c3c26fc0d96786774f428e3c72ee0e665c3700377b10b3b588dbf5c1105da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:48:53 compute-0 systemd[1]: libpod-conmon-d24c3c26fc0d96786774f428e3c72ee0e665c3700377b10b3b588dbf5c1105da.scope: Deactivated successfully.
Jan 23 18:48:53 compute-0 sudo[181621]: pam_unix(sudo:session): session closed for user root
Jan 23 18:48:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:48:53 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:48:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:48:53 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:48:53 compute-0 sudo[181813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:48:53 compute-0 sudo[181813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:48:53 compute-0 sudo[181813]: pam_unix(sudo:session): session closed for user root
Jan 23 18:48:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:48:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:48:54 compute-0 ceph-mon[75097]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:48:57 compute-0 ceph-mon[75097]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:48:58 compute-0 ceph-mon[75097]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:49:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:49:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:49:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:49:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:49:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:49:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:00 compute-0 kernel: SELinux:  Converting 2776 SID table entries...
Jan 23 18:49:00 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 18:49:00 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 18:49:00 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 18:49:00 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 18:49:00 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 18:49:00 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 18:49:00 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 18:49:01 compute-0 ceph-mon[75097]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:02 compute-0 groupadd[181850]: group added to /etc/group: name=dnsmasq, GID=992
Jan 23 18:49:02 compute-0 groupadd[181850]: group added to /etc/gshadow: name=dnsmasq
Jan 23 18:49:02 compute-0 groupadd[181850]: new group: name=dnsmasq, GID=992
Jan 23 18:49:02 compute-0 useradd[181857]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 23 18:49:02 compute-0 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 23 18:49:02 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 23 18:49:02 compute-0 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 23 18:49:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:02 compute-0 ceph-mon[75097]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:03 compute-0 groupadd[181870]: group added to /etc/group: name=clevis, GID=991
Jan 23 18:49:03 compute-0 groupadd[181870]: group added to /etc/gshadow: name=clevis
Jan 23 18:49:03 compute-0 groupadd[181870]: new group: name=clevis, GID=991
Jan 23 18:49:03 compute-0 useradd[181877]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 23 18:49:03 compute-0 usermod[181887]: add 'clevis' to group 'tss'
Jan 23 18:49:03 compute-0 usermod[181887]: add 'clevis' to shadow group 'tss'
Jan 23 18:49:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:04 compute-0 ceph-mon[75097]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:06 compute-0 polkitd[43413]: Reloading rules
Jan 23 18:49:06 compute-0 polkitd[43413]: Collecting garbage unconditionally...
Jan 23 18:49:06 compute-0 polkitd[43413]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 18:49:06 compute-0 polkitd[43413]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 18:49:06 compute-0 polkitd[43413]: Finished loading, compiling and executing 3 rules
Jan 23 18:49:06 compute-0 polkitd[43413]: Reloading rules
Jan 23 18:49:06 compute-0 polkitd[43413]: Collecting garbage unconditionally...
Jan 23 18:49:06 compute-0 polkitd[43413]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 18:49:06 compute-0 polkitd[43413]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 18:49:06 compute-0 polkitd[43413]: Finished loading, compiling and executing 3 rules
Jan 23 18:49:07 compute-0 ceph-mon[75097]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:09 compute-0 ceph-mon[75097]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.257259) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194149257348, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2049, "num_deletes": 251, "total_data_size": 3580202, "memory_usage": 3629376, "flush_reason": "Manual Compaction"}
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194149297511, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3492965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9778, "largest_seqno": 11826, "table_properties": {"data_size": 3483581, "index_size": 5942, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 17971, "raw_average_key_size": 19, "raw_value_size": 3465107, "raw_average_value_size": 3758, "num_data_blocks": 269, "num_entries": 922, "num_filter_entries": 922, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193915, "oldest_key_time": 1769193915, "file_creation_time": 1769194149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 40507 microseconds, and 14309 cpu microseconds.
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.297780) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3492965 bytes OK
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.297868) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.300703) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.300731) EVENT_LOG_v1 {"time_micros": 1769194149300723, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.300755) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3571632, prev total WAL file size 3571632, number of live WAL files 2.
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.303032) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3411KB)], [26(6271KB)]
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194149303124, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9915365, "oldest_snapshot_seqno": -1}
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3758 keys, 8278183 bytes, temperature: kUnknown
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194149425431, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8278183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8249129, "index_size": 18576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90324, "raw_average_key_size": 24, "raw_value_size": 8177386, "raw_average_value_size": 2175, "num_data_blocks": 804, "num_entries": 3758, "num_filter_entries": 3758, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769194149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.425672) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8278183 bytes
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.427757) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.0 rd, 67.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.1 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(5.2) write-amplify(2.4) OK, records in: 4272, records dropped: 514 output_compression: NoCompression
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.427788) EVENT_LOG_v1 {"time_micros": 1769194149427763, "job": 10, "event": "compaction_finished", "compaction_time_micros": 122372, "compaction_time_cpu_micros": 25305, "output_level": 6, "num_output_files": 1, "total_output_size": 8278183, "num_input_records": 4272, "num_output_records": 3758, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194149428298, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194149429219, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.302871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.429345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.429354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.429358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.429361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:49:09 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:49:09.429364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:49:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:10 compute-0 podman[182079]: 2026-01-23 18:49:10.628659426 +0000 UTC m=+0.120034801 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 18:49:11 compute-0 ceph-mon[75097]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:12 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 23 18:49:12 compute-0 sshd[1008]: Received signal 15; terminating.
Jan 23 18:49:12 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 23 18:49:12 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 23 18:49:12 compute-0 systemd[1]: sshd.service: Consumed 2.208s CPU time, read 32.0K from disk, written 4.0K to disk.
Jan 23 18:49:12 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 23 18:49:12 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 23 18:49:12 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 18:49:12 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 18:49:12 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 18:49:12 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 23 18:49:12 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 23 18:49:12 compute-0 sshd[182719]: Server listening on 0.0.0.0 port 22.
Jan 23 18:49:12 compute-0 sshd[182719]: Server listening on :: port 22.
Jan 23 18:49:12 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 23 18:49:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:13 compute-0 ceph-mon[75097]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:49:13.566 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:49:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:49:13.567 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:49:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:49:13.567 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:49:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:14 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 18:49:14 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 18:49:14 compute-0 systemd[1]: Reloading.
Jan 23 18:49:14 compute-0 systemd-rc-local-generator[182973]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:49:14 compute-0 systemd-sysv-generator[182979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:49:14 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 18:49:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:17 compute-0 ceph-mon[75097]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:18 compute-0 sudo[163076]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:18 compute-0 ceph-mon[75097]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:18 compute-0 sudo[187633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehroxtyshtwchsyxokgbsyeufwjqfcko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194158.24038-331-128190192438942/AnsiballZ_systemd.py'
Jan 23 18:49:18 compute-0 sudo[187633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:18 compute-0 podman[187546]: 2026-01-23 18:49:18.891531388 +0000 UTC m=+0.060937850 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 18:49:19 compute-0 python3.9[187660]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 18:49:19 compute-0 systemd[1]: Reloading.
Jan 23 18:49:19 compute-0 systemd-sysv-generator[188107]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:49:19 compute-0 systemd-rc-local-generator[188101]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:49:19 compute-0 ceph-mon[75097]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:19 compute-0 sudo[187633]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:20 compute-0 sudo[188967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwdqfugticqbmhakzleaouottlstswdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194159.7996671-331-33992100714628/AnsiballZ_systemd.py'
Jan 23 18:49:20 compute-0 sudo[188967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:20 compute-0 ceph-mon[75097]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:20 compute-0 python3.9[188983]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 18:49:20 compute-0 systemd[1]: Reloading.
Jan 23 18:49:20 compute-0 systemd-rc-local-generator[189477]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:49:20 compute-0 systemd-sysv-generator[189482]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:49:20 compute-0 sudo[188967]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:21 compute-0 sudo[190317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qycbvpghmxfqbqpbssfpnqujhkhngtgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194161.0170987-331-108143668985125/AnsiballZ_systemd.py'
Jan 23 18:49:21 compute-0 sudo[190317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:21 compute-0 python3.9[190342]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 18:49:21 compute-0 systemd[1]: Reloading.
Jan 23 18:49:21 compute-0 systemd-rc-local-generator[190795]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:49:21 compute-0 systemd-sysv-generator[190799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:49:22 compute-0 sudo[190317]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:22 compute-0 sudo[191457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-absybzjjioylivzlzjuubolgerpsedsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194162.1411602-331-107952655943977/AnsiballZ_systemd.py'
Jan 23 18:49:22 compute-0 sudo[191457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:22 compute-0 python3.9[191483]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 18:49:22 compute-0 systemd[1]: Reloading.
Jan 23 18:49:22 compute-0 systemd-rc-local-generator[191924]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:49:22 compute-0 systemd-sysv-generator[191928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:49:23 compute-0 sudo[191457]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:23 compute-0 ceph-mon[75097]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 18:49:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 18:49:23 compute-0 systemd[1]: man-db-cache-update.service: Consumed 11.102s CPU time.
Jan 23 18:49:23 compute-0 systemd[1]: run-r602bec7e63024327adb3de8bfb8f2a41.service: Deactivated successfully.
Jan 23 18:49:23 compute-0 sudo[192289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyvtvmhenkyqkfvzhhrpqwynydliicfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194163.33864-360-269439502992852/AnsiballZ_systemd.py'
Jan 23 18:49:23 compute-0 sudo[192289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:23 compute-0 python3.9[192291]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:24 compute-0 systemd[1]: Reloading.
Jan 23 18:49:24 compute-0 systemd-sysv-generator[192321]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:49:24 compute-0 systemd-rc-local-generator[192317]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:49:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:24 compute-0 sudo[192289]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:24 compute-0 sudo[192479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmgqsoimdaksznubpmzkdmeujhothjqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194164.5522256-360-271646554888917/AnsiballZ_systemd.py'
Jan 23 18:49:24 compute-0 sudo[192479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:25 compute-0 python3.9[192481]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:25 compute-0 ceph-mon[75097]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:25 compute-0 systemd[1]: Reloading.
Jan 23 18:49:25 compute-0 systemd-sysv-generator[192516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:49:25 compute-0 systemd-rc-local-generator[192513]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:49:25 compute-0 sudo[192479]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:26 compute-0 sudo[192669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsmfcnggcnydvylpxdnkhdwthbwlkngr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194165.731894-360-224561649134630/AnsiballZ_systemd.py'
Jan 23 18:49:26 compute-0 sudo[192669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:26 compute-0 python3.9[192671]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:26 compute-0 systemd[1]: Reloading.
Jan 23 18:49:26 compute-0 systemd-rc-local-generator[192701]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:49:26 compute-0 systemd-sysv-generator[192704]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:49:26 compute-0 sudo[192669]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:27 compute-0 sudo[192859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hupeabkpezpwpxapxbdugtuxojzozzwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194166.961293-360-117232634200947/AnsiballZ_systemd.py'
Jan 23 18:49:27 compute-0 sudo[192859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:27 compute-0 ceph-mon[75097]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:27 compute-0 python3.9[192861]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:28 compute-0 sudo[192859]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:49:28
Jan 23 18:49:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:49:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:49:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.rgw.root', 'volumes']
Jan 23 18:49:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:49:29 compute-0 sudo[193014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbjiuyqjxmbzxtiyiizbywuyljdccfmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194168.8165667-360-147378849930745/AnsiballZ_systemd.py'
Jan 23 18:49:29 compute-0 sudo[193014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:29 compute-0 ceph-mon[75097]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:29 compute-0 python3.9[193016]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:29 compute-0 systemd[1]: Reloading.
Jan 23 18:49:29 compute-0 systemd-rc-local-generator[193045]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:49:29 compute-0 systemd-sysv-generator[193050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:49:29 compute-0 sudo[193014]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:30 compute-0 sudo[193203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcjvjemdkwqirdiypgfmujeafzdwiryc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194170.049098-396-232890226386372/AnsiballZ_systemd.py'
Jan 23 18:49:30 compute-0 sudo[193203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:30 compute-0 python3.9[193205]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 18:49:30 compute-0 systemd[1]: Reloading.
Jan 23 18:49:30 compute-0 systemd-rc-local-generator[193230]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:49:30 compute-0 systemd-sysv-generator[193237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:49:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:49:31 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 23 18:49:31 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 23 18:49:31 compute-0 sudo[193203]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:31 compute-0 ceph-mon[75097]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:31 compute-0 sudo[193395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfjxgehotszknxoetcynoxctzgaasjih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194171.2707772-404-84082328286205/AnsiballZ_systemd.py'
Jan 23 18:49:31 compute-0 sudo[193395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:31 compute-0 python3.9[193397]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:31 compute-0 sudo[193395]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:32 compute-0 sudo[193550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhaveyslszdrxykbgtvbcnykmsvngrvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194172.125646-404-25460250945813/AnsiballZ_systemd.py'
Jan 23 18:49:32 compute-0 sudo[193550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:32 compute-0 python3.9[193552]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:33 compute-0 ceph-mon[75097]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:33 compute-0 sudo[193550]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:34 compute-0 sudo[193705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blfdznrzczbvhrzpblxfvvnbdsqvetvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194173.993952-404-72867405654282/AnsiballZ_systemd.py'
Jan 23 18:49:34 compute-0 sudo[193705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:34 compute-0 python3.9[193707]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:35 compute-0 ceph-mon[75097]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:35 compute-0 sudo[193705]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:36 compute-0 sudo[193860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdrzdbnkrfzatlbwmcfdaafwjsfards ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194175.8722682-404-245802434741714/AnsiballZ_systemd.py'
Jan 23 18:49:36 compute-0 sudo[193860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:36 compute-0 python3.9[193862]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:36 compute-0 sudo[193860]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:36 compute-0 sudo[194015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyweqmybwpndwtlccyssqftqmgdtolgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194176.7015498-404-115273032880045/AnsiballZ_systemd.py'
Jan 23 18:49:36 compute-0 sudo[194015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:37 compute-0 python3.9[194017]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:37 compute-0 ceph-mon[75097]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:37 compute-0 sudo[194015]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:37 compute-0 sudo[194170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucbnqogihawxxcigtxhsilwpnuvqhwfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194177.5002291-404-102634243989231/AnsiballZ_systemd.py'
Jan 23 18:49:37 compute-0 sudo[194170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:38 compute-0 python3.9[194172]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:38 compute-0 sudo[194170]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:38 compute-0 sudo[194325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwrwzrfifpvhwpsshrlyitgpxdzobrvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194178.3006337-404-115989334536689/AnsiballZ_systemd.py'
Jan 23 18:49:38 compute-0 sudo[194325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:38 compute-0 python3.9[194327]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:39 compute-0 sudo[194325]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:39 compute-0 ceph-mon[75097]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:39 compute-0 sudo[194480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaxdswkmmegpfeheqxvircnwaxfiolav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194179.1848612-404-91977872666942/AnsiballZ_systemd.py'
Jan 23 18:49:39 compute-0 sudo[194480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:39 compute-0 python3.9[194482]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:39 compute-0 sudo[194480]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:40 compute-0 sudo[194635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgujlutblkrrhtoaclfzmnltjcyvxecv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194180.0216775-404-186267820922642/AnsiballZ_systemd.py'
Jan 23 18:49:40 compute-0 sudo[194635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:49:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:49:40 compute-0 python3.9[194637]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:40 compute-0 sudo[194635]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:40 compute-0 ceph-mon[75097]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:40 compute-0 podman[194641]: 2026-01-23 18:49:40.80991187 +0000 UTC m=+0.127747333 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 23 18:49:41 compute-0 sudo[194816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ineggmifitrlclvemrkbarygvvknhjnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194180.7988446-404-87686729103522/AnsiballZ_systemd.py'
Jan 23 18:49:41 compute-0 sudo[194816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:41 compute-0 python3.9[194818]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:41 compute-0 sudo[194816]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:41 compute-0 sudo[194971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-najjuqrurlynqwejvfafooarcmxvvvpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194181.6855848-404-231584320727909/AnsiballZ_systemd.py'
Jan 23 18:49:41 compute-0 sudo[194971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:42 compute-0 python3.9[194973]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:42 compute-0 sudo[194971]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:42 compute-0 sudo[195126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plbfahqwjtnkhchstlbhkvqjahxvtwog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194182.4345884-404-249773377766783/AnsiballZ_systemd.py'
Jan 23 18:49:42 compute-0 sudo[195126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:42 compute-0 python3.9[195128]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:43 compute-0 sudo[195126]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:43 compute-0 ceph-mon[75097]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:43 compute-0 sudo[195281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ometlicxnvzbtzmuygbswiabghslxgoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194183.208929-404-39778120774236/AnsiballZ_systemd.py'
Jan 23 18:49:43 compute-0 sudo[195281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:43 compute-0 python3.9[195283]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:43 compute-0 sudo[195281]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:44 compute-0 sudo[195436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bryoavlynzprttmhuisdghrbpkumajkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194184.1020796-404-124072641094813/AnsiballZ_systemd.py'
Jan 23 18:49:44 compute-0 sudo[195436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:44 compute-0 python3.9[195438]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 18:49:44 compute-0 sudo[195436]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:45 compute-0 ceph-mon[75097]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:45 compute-0 sudo[195591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnzxfayvzzdzuxbvdgszfslgioghutmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194185.1564665-506-195245444035891/AnsiballZ_file.py'
Jan 23 18:49:45 compute-0 sudo[195591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:45 compute-0 python3.9[195593]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:49:45 compute-0 sudo[195591]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:46 compute-0 sudo[195743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlxftigfbzmfbgoplsahqwbqgzszvfdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194185.7395785-506-106636421756285/AnsiballZ_file.py'
Jan 23 18:49:46 compute-0 sudo[195743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:46 compute-0 python3.9[195745]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:49:46 compute-0 sudo[195743]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:46 compute-0 sudo[195895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzxmuabvqfpmionqgrraqkeqhhzuceuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194186.3761818-506-77695023776392/AnsiballZ_file.py'
Jan 23 18:49:46 compute-0 sudo[195895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:46 compute-0 python3.9[195897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:49:46 compute-0 sudo[195895]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:47 compute-0 ceph-mon[75097]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:47 compute-0 sudo[196047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqvckxccdkluhcjpeacdbsndbfylbgoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194187.007699-506-190135084053113/AnsiballZ_file.py'
Jan 23 18:49:47 compute-0 sudo[196047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:47 compute-0 python3.9[196049]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:49:47 compute-0 sudo[196047]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:47 compute-0 sudo[196199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abakhpazpajkviriaxjgexrffkqawviw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194187.6767483-506-276793553341988/AnsiballZ_file.py'
Jan 23 18:49:47 compute-0 sudo[196199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:48 compute-0 python3.9[196201]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:49:48 compute-0 sudo[196199]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:48 compute-0 sudo[196351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peretxhywlqaigccwavwppmfwnhfrdto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194188.338489-506-254010944610705/AnsiballZ_file.py'
Jan 23 18:49:48 compute-0 sudo[196351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:48 compute-0 python3.9[196353]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:49:48 compute-0 sudo[196351]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:49 compute-0 ceph-mon[75097]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:49 compute-0 podman[196477]: 2026-01-23 18:49:49.755622465 +0000 UTC m=+0.074568583 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 18:49:50 compute-0 python3.9[196518]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:49:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:50 compute-0 sudo[196674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxkojszfuujznkviupjhmbpcsbccxgvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194190.2394867-557-244696089630207/AnsiballZ_stat.py'
Jan 23 18:49:50 compute-0 ceph-mon[75097]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:50 compute-0 sudo[196674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:50 compute-0 python3.9[196676]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:49:50 compute-0 sudo[196674]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:51 compute-0 sudo[196799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbqvbobdhkxcjfrccetulucskuoaybmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194190.2394867-557-244696089630207/AnsiballZ_copy.py'
Jan 23 18:49:51 compute-0 sudo[196799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:51 compute-0 python3.9[196801]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769194190.2394867-557-244696089630207/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:49:51 compute-0 sudo[196799]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:52 compute-0 sudo[196951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsydgccsmepmwmdtovcwpkkugdrgjvnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194191.7737243-557-3660014885157/AnsiballZ_stat.py'
Jan 23 18:49:52 compute-0 sudo[196951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:52 compute-0 python3.9[196953]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:49:52 compute-0 sudo[196951]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:52 compute-0 sudo[197076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cczghglcbacclhnetejkvvzyjogdrocy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194191.7737243-557-3660014885157/AnsiballZ_copy.py'
Jan 23 18:49:52 compute-0 sudo[197076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:52 compute-0 python3.9[197078]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769194191.7737243-557-3660014885157/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:49:52 compute-0 sudo[197076]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:53 compute-0 sudo[197228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znixhondxrsbczicbhlqvljczonrnwec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194192.9090714-557-32136437438209/AnsiballZ_stat.py'
Jan 23 18:49:53 compute-0 sudo[197228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:53 compute-0 ceph-mon[75097]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:53 compute-0 python3.9[197230]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:49:53 compute-0 sudo[197228]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:53 compute-0 sudo[197276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:49:53 compute-0 sudo[197276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:49:53 compute-0 sudo[197276]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:53 compute-0 sudo[197328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:49:53 compute-0 sudo[197328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:49:53 compute-0 sudo[197403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sygvzxxanjzpzihfrhgzfvaoejhdvoae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194192.9090714-557-32136437438209/AnsiballZ_copy.py'
Jan 23 18:49:53 compute-0 sudo[197403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:53 compute-0 python3.9[197405]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769194192.9090714-557-32136437438209/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:49:53 compute-0 sudo[197403]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:54 compute-0 sudo[197328]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:49:54 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:49:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:49:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:49:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:49:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:49:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:49:54 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:49:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:49:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:49:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:49:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:49:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:49:54 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:49:54 compute-0 sudo[197537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:49:54 compute-0 sudo[197537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:49:54 compute-0 sudo[197537]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:54 compute-0 sudo[197579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:49:54 compute-0 sudo[197579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:49:54 compute-0 sudo[197637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slpxiyygaagjktfwilmvediggnunornz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194194.1350372-557-239013451545853/AnsiballZ_stat.py'
Jan 23 18:49:54 compute-0 sudo[197637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:54 compute-0 python3.9[197639]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:49:54 compute-0 sudo[197637]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:54 compute-0 podman[197653]: 2026-01-23 18:49:54.693739398 +0000 UTC m=+0.051786928 container create bc358a1d1e1951835cddf25915bfe01b3aeb028a9413774eee5311c2254b2c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_meitner, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:49:54 compute-0 systemd[1]: Started libpod-conmon-bc358a1d1e1951835cddf25915bfe01b3aeb028a9413774eee5311c2254b2c19.scope.
Jan 23 18:49:54 compute-0 podman[197653]: 2026-01-23 18:49:54.672518099 +0000 UTC m=+0.030565659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:49:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:49:54 compute-0 podman[197653]: 2026-01-23 18:49:54.799913413 +0000 UTC m=+0.157960973 container init bc358a1d1e1951835cddf25915bfe01b3aeb028a9413774eee5311c2254b2c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_meitner, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 18:49:54 compute-0 podman[197653]: 2026-01-23 18:49:54.809744014 +0000 UTC m=+0.167791564 container start bc358a1d1e1951835cddf25915bfe01b3aeb028a9413774eee5311c2254b2c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 18:49:54 compute-0 podman[197653]: 2026-01-23 18:49:54.814693291 +0000 UTC m=+0.172740981 container attach bc358a1d1e1951835cddf25915bfe01b3aeb028a9413774eee5311c2254b2c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_meitner, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 18:49:54 compute-0 elegant_meitner[197694]: 167 167
Jan 23 18:49:54 compute-0 systemd[1]: libpod-bc358a1d1e1951835cddf25915bfe01b3aeb028a9413774eee5311c2254b2c19.scope: Deactivated successfully.
Jan 23 18:49:54 compute-0 conmon[197694]: conmon bc358a1d1e1951835cdd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc358a1d1e1951835cddf25915bfe01b3aeb028a9413774eee5311c2254b2c19.scope/container/memory.events
Jan 23 18:49:54 compute-0 podman[197653]: 2026-01-23 18:49:54.819406691 +0000 UTC m=+0.177454241 container died bc358a1d1e1951835cddf25915bfe01b3aeb028a9413774eee5311c2254b2c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 18:49:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-185da0a2adab5db916d16410e7ee68a72aa2e513e9ec24075d9711ed124b6755-merged.mount: Deactivated successfully.
Jan 23 18:49:54 compute-0 podman[197653]: 2026-01-23 18:49:54.871909775 +0000 UTC m=+0.229957305 container remove bc358a1d1e1951835cddf25915bfe01b3aeb028a9413774eee5311c2254b2c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_meitner, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 18:49:54 compute-0 systemd[1]: libpod-conmon-bc358a1d1e1951835cddf25915bfe01b3aeb028a9413774eee5311c2254b2c19.scope: Deactivated successfully.
Jan 23 18:49:55 compute-0 sudo[197827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fouexzefrettsgonobipwhymbfsedroc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194194.1350372-557-239013451545853/AnsiballZ_copy.py'
Jan 23 18:49:55 compute-0 sudo[197827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:55 compute-0 podman[197794]: 2026-01-23 18:49:55.032778705 +0000 UTC m=+0.044044205 container create e86ef219345ca16d5eaf32e0644c8d7f115e161beee1c0554a58fce89e287a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:49:55 compute-0 systemd[1]: Started libpod-conmon-e86ef219345ca16d5eaf32e0644c8d7f115e161beee1c0554a58fce89e287a9f.scope.
Jan 23 18:49:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:49:55 compute-0 podman[197794]: 2026-01-23 18:49:55.008949315 +0000 UTC m=+0.020214845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:49:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09621fdfe467c16ce4ef8260759f83c80c5cc6abaab1b1134059f50c8cce87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09621fdfe467c16ce4ef8260759f83c80c5cc6abaab1b1134059f50c8cce87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09621fdfe467c16ce4ef8260759f83c80c5cc6abaab1b1134059f50c8cce87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09621fdfe467c16ce4ef8260759f83c80c5cc6abaab1b1134059f50c8cce87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09621fdfe467c16ce4ef8260759f83c80c5cc6abaab1b1134059f50c8cce87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:55 compute-0 podman[197794]: 2026-01-23 18:49:55.126817775 +0000 UTC m=+0.138083315 container init e86ef219345ca16d5eaf32e0644c8d7f115e161beee1c0554a58fce89e287a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:49:55 compute-0 podman[197794]: 2026-01-23 18:49:55.145349291 +0000 UTC m=+0.156614791 container start e86ef219345ca16d5eaf32e0644c8d7f115e161beee1c0554a58fce89e287a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_galois, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:49:55 compute-0 podman[197794]: 2026-01-23 18:49:55.149847947 +0000 UTC m=+0.161113447 container attach e86ef219345ca16d5eaf32e0644c8d7f115e161beee1c0554a58fce89e287a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_galois, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:49:55 compute-0 python3.9[197830]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769194194.1350372-557-239013451545853/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:49:55 compute-0 ceph-mon[75097]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:49:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:49:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:49:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:49:55 compute-0 sudo[197827]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:55 compute-0 competent_galois[197834]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:49:55 compute-0 competent_galois[197834]: --> All data devices are unavailable
Jan 23 18:49:55 compute-0 systemd[1]: libpod-e86ef219345ca16d5eaf32e0644c8d7f115e161beee1c0554a58fce89e287a9f.scope: Deactivated successfully.
Jan 23 18:49:55 compute-0 podman[197794]: 2026-01-23 18:49:55.639517665 +0000 UTC m=+0.650783185 container died e86ef219345ca16d5eaf32e0644c8d7f115e161beee1c0554a58fce89e287a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:49:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b09621fdfe467c16ce4ef8260759f83c80c5cc6abaab1b1134059f50c8cce87-merged.mount: Deactivated successfully.
Jan 23 18:49:55 compute-0 podman[197794]: 2026-01-23 18:49:55.697884696 +0000 UTC m=+0.709150186 container remove e86ef219345ca16d5eaf32e0644c8d7f115e161beee1c0554a58fce89e287a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:49:55 compute-0 systemd[1]: libpod-conmon-e86ef219345ca16d5eaf32e0644c8d7f115e161beee1c0554a58fce89e287a9f.scope: Deactivated successfully.
Jan 23 18:49:55 compute-0 sudo[198014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgbefnamycxiaahuhynvdcvhrvoiicmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194195.4114418-557-39934198778845/AnsiballZ_stat.py'
Jan 23 18:49:55 compute-0 sudo[198014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:55 compute-0 sudo[197579]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:55 compute-0 sudo[198017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:49:55 compute-0 sudo[198017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:49:55 compute-0 sudo[198017]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:55 compute-0 sudo[198042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:49:55 compute-0 sudo[198042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:49:55 compute-0 python3.9[198016]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:49:55 compute-0 sudo[198014]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:56 compute-0 podman[198125]: 2026-01-23 18:49:56.11930713 +0000 UTC m=+0.037113433 container create dd2d2757a20f7aac23040754e0476e64f5139982f532b23e80f04949c3ae1c3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_newton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 18:49:56 compute-0 systemd[1]: Started libpod-conmon-dd2d2757a20f7aac23040754e0476e64f5139982f532b23e80f04949c3ae1c3d.scope.
Jan 23 18:49:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:49:56 compute-0 podman[198125]: 2026-01-23 18:49:56.102465904 +0000 UTC m=+0.020272237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:49:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:56 compute-0 podman[198125]: 2026-01-23 18:49:56.202421874 +0000 UTC m=+0.120228197 container init dd2d2757a20f7aac23040754e0476e64f5139982f532b23e80f04949c3ae1c3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:49:56 compute-0 podman[198125]: 2026-01-23 18:49:56.212229464 +0000 UTC m=+0.130035777 container start dd2d2757a20f7aac23040754e0476e64f5139982f532b23e80f04949c3ae1c3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_newton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:49:56 compute-0 crazy_newton[198168]: 167 167
Jan 23 18:49:56 compute-0 systemd[1]: libpod-dd2d2757a20f7aac23040754e0476e64f5139982f532b23e80f04949c3ae1c3d.scope: Deactivated successfully.
Jan 23 18:49:56 compute-0 conmon[198168]: conmon dd2d2757a20f7aac2304 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd2d2757a20f7aac23040754e0476e64f5139982f532b23e80f04949c3ae1c3d.scope/container/memory.events
Jan 23 18:49:56 compute-0 podman[198125]: 2026-01-23 18:49:56.216620338 +0000 UTC m=+0.134426651 container attach dd2d2757a20f7aac23040754e0476e64f5139982f532b23e80f04949c3ae1c3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:49:56 compute-0 podman[198125]: 2026-01-23 18:49:56.217089188 +0000 UTC m=+0.134895501 container died dd2d2757a20f7aac23040754e0476e64f5139982f532b23e80f04949c3ae1c3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_newton, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 18:49:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-761d0a01addba9a094a91bd146f9128ecf83383eea6084e61f9355ed32200ee6-merged.mount: Deactivated successfully.
Jan 23 18:49:56 compute-0 podman[198125]: 2026-01-23 18:49:56.254599 +0000 UTC m=+0.172405303 container remove dd2d2757a20f7aac23040754e0476e64f5139982f532b23e80f04949c3ae1c3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_newton, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:49:56 compute-0 systemd[1]: libpod-conmon-dd2d2757a20f7aac23040754e0476e64f5139982f532b23e80f04949c3ae1c3d.scope: Deactivated successfully.
Jan 23 18:49:56 compute-0 sudo[198233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rztawzjvyovxnqjcczmpgwiymmuljpyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194195.4114418-557-39934198778845/AnsiballZ_copy.py'
Jan 23 18:49:56 compute-0 sudo[198233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:56 compute-0 podman[198243]: 2026-01-23 18:49:56.442834503 +0000 UTC m=+0.060328458 container create 4da82388bf83fea8965c0272d7ed3a881dc94e7a9ba0f4ddefb16af8b4e9c32d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 18:49:56 compute-0 python3.9[198237]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769194195.4114418-557-39934198778845/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:49:56 compute-0 systemd[1]: Started libpod-conmon-4da82388bf83fea8965c0272d7ed3a881dc94e7a9ba0f4ddefb16af8b4e9c32d.scope.
Jan 23 18:49:56 compute-0 podman[198243]: 2026-01-23 18:49:56.411887316 +0000 UTC m=+0.029381241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:49:56 compute-0 sudo[198233]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2c8ce2d991d4daf9c89f3da7c5cdc337fffd2c369a7fb6d2a9a37d8f98117a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2c8ce2d991d4daf9c89f3da7c5cdc337fffd2c369a7fb6d2a9a37d8f98117a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2c8ce2d991d4daf9c89f3da7c5cdc337fffd2c369a7fb6d2a9a37d8f98117a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2c8ce2d991d4daf9c89f3da7c5cdc337fffd2c369a7fb6d2a9a37d8f98117a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:56 compute-0 podman[198243]: 2026-01-23 18:49:56.548409965 +0000 UTC m=+0.165903950 container init 4da82388bf83fea8965c0272d7ed3a881dc94e7a9ba0f4ddefb16af8b4e9c32d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:49:56 compute-0 podman[198243]: 2026-01-23 18:49:56.556653909 +0000 UTC m=+0.174147804 container start 4da82388bf83fea8965c0272d7ed3a881dc94e7a9ba0f4ddefb16af8b4e9c32d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khorana, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 23 18:49:56 compute-0 podman[198243]: 2026-01-23 18:49:56.560045328 +0000 UTC m=+0.177539253 container attach 4da82388bf83fea8965c0272d7ed3a881dc94e7a9ba0f4ddefb16af8b4e9c32d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khorana, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:49:56 compute-0 serene_khorana[198260]: {
Jan 23 18:49:56 compute-0 serene_khorana[198260]:     "0": [
Jan 23 18:49:56 compute-0 serene_khorana[198260]:         {
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "devices": [
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "/dev/loop3"
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             ],
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_name": "ceph_lv0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_size": "21470642176",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "name": "ceph_lv0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "tags": {
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.cluster_name": "ceph",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.crush_device_class": "",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.encrypted": "0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.objectstore": "bluestore",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.osd_id": "0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.type": "block",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.vdo": "0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.with_tpm": "0"
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             },
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "type": "block",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "vg_name": "ceph_vg0"
Jan 23 18:49:56 compute-0 serene_khorana[198260]:         }
Jan 23 18:49:56 compute-0 serene_khorana[198260]:     ],
Jan 23 18:49:56 compute-0 serene_khorana[198260]:     "1": [
Jan 23 18:49:56 compute-0 serene_khorana[198260]:         {
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "devices": [
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "/dev/loop4"
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             ],
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_name": "ceph_lv1",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_size": "21470642176",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "name": "ceph_lv1",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "tags": {
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.cluster_name": "ceph",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.crush_device_class": "",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.encrypted": "0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.objectstore": "bluestore",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.osd_id": "1",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.type": "block",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.vdo": "0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.with_tpm": "0"
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             },
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "type": "block",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "vg_name": "ceph_vg1"
Jan 23 18:49:56 compute-0 serene_khorana[198260]:         }
Jan 23 18:49:56 compute-0 serene_khorana[198260]:     ],
Jan 23 18:49:56 compute-0 serene_khorana[198260]:     "2": [
Jan 23 18:49:56 compute-0 serene_khorana[198260]:         {
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "devices": [
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "/dev/loop5"
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             ],
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_name": "ceph_lv2",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_size": "21470642176",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "name": "ceph_lv2",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "tags": {
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.cluster_name": "ceph",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.crush_device_class": "",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.encrypted": "0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.objectstore": "bluestore",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.osd_id": "2",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.type": "block",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.vdo": "0",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:                 "ceph.with_tpm": "0"
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             },
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "type": "block",
Jan 23 18:49:56 compute-0 serene_khorana[198260]:             "vg_name": "ceph_vg2"
Jan 23 18:49:56 compute-0 serene_khorana[198260]:         }
Jan 23 18:49:56 compute-0 serene_khorana[198260]:     ]
Jan 23 18:49:56 compute-0 serene_khorana[198260]: }
Jan 23 18:49:56 compute-0 systemd[1]: libpod-4da82388bf83fea8965c0272d7ed3a881dc94e7a9ba0f4ddefb16af8b4e9c32d.scope: Deactivated successfully.
Jan 23 18:49:56 compute-0 podman[198243]: 2026-01-23 18:49:56.875880611 +0000 UTC m=+0.493374536 container died 4da82388bf83fea8965c0272d7ed3a881dc94e7a9ba0f4ddefb16af8b4e9c32d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khorana, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 23 18:49:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac2c8ce2d991d4daf9c89f3da7c5cdc337fffd2c369a7fb6d2a9a37d8f98117a-merged.mount: Deactivated successfully.
Jan 23 18:49:56 compute-0 sudo[198427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzjamnbmdbkijhivbwdizewqxibgxvzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194196.6522443-557-126905942466924/AnsiballZ_stat.py'
Jan 23 18:49:56 compute-0 sudo[198427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:56 compute-0 podman[198243]: 2026-01-23 18:49:56.925695751 +0000 UTC m=+0.543189656 container remove 4da82388bf83fea8965c0272d7ed3a881dc94e7a9ba0f4ddefb16af8b4e9c32d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:49:56 compute-0 systemd[1]: libpod-conmon-4da82388bf83fea8965c0272d7ed3a881dc94e7a9ba0f4ddefb16af8b4e9c32d.scope: Deactivated successfully.
Jan 23 18:49:56 compute-0 sudo[198042]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:57 compute-0 sudo[198435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:49:57 compute-0 sudo[198435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:49:57 compute-0 sudo[198435]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:57 compute-0 python3.9[198434]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:49:57 compute-0 sudo[198460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:49:57 compute-0 sudo[198460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:49:57 compute-0 sudo[198427]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:57 compute-0 sudo[198633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwjxdamhjeprbhklyefypmukizoodzor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194196.6522443-557-126905942466924/AnsiballZ_copy.py'
Jan 23 18:49:57 compute-0 sudo[198633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:57 compute-0 podman[198569]: 2026-01-23 18:49:57.373577867 +0000 UTC m=+0.017328088 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:49:57 compute-0 python3.9[198635]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769194196.6522443-557-126905942466924/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:49:57 compute-0 sudo[198633]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:49:57 compute-0 ceph-mon[75097]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:57 compute-0 podman[198569]: 2026-01-23 18:49:57.757758456 +0000 UTC m=+0.401508697 container create d351a1f95aa2c1bd3ce1768c813c983403464cb71e641a54db3d3d2c5577f3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:49:57 compute-0 systemd[1]: Started libpod-conmon-d351a1f95aa2c1bd3ce1768c813c983403464cb71e641a54db3d3d2c5577f3fd.scope.
Jan 23 18:49:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:49:57 compute-0 podman[198569]: 2026-01-23 18:49:57.981200097 +0000 UTC m=+0.624950388 container init d351a1f95aa2c1bd3ce1768c813c983403464cb71e641a54db3d3d2c5577f3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dijkstra, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:49:57 compute-0 podman[198569]: 2026-01-23 18:49:57.990813943 +0000 UTC m=+0.634564184 container start d351a1f95aa2c1bd3ce1768c813c983403464cb71e641a54db3d3d2c5577f3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dijkstra, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 23 18:49:57 compute-0 interesting_dijkstra[198662]: 167 167
Jan 23 18:49:58 compute-0 systemd[1]: libpod-d351a1f95aa2c1bd3ce1768c813c983403464cb71e641a54db3d3d2c5577f3fd.scope: Deactivated successfully.
Jan 23 18:49:58 compute-0 podman[198569]: 2026-01-23 18:49:58.068002197 +0000 UTC m=+0.711752448 container attach d351a1f95aa2c1bd3ce1768c813c983403464cb71e641a54db3d3d2c5577f3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 18:49:58 compute-0 podman[198569]: 2026-01-23 18:49:58.068337995 +0000 UTC m=+0.712088206 container died d351a1f95aa2c1bd3ce1768c813c983403464cb71e641a54db3d3d2c5577f3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dijkstra, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:49:58 compute-0 sudo[198801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qohyxxtedtgjtdmasnfcyjqbtlwyzyac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194197.8521273-557-204566425164885/AnsiballZ_stat.py'
Jan 23 18:49:58 compute-0 sudo[198801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-41fbd8becfabd6ab0e2ef335c6d14095d976bdf041b5fa3e9604b419b1c67901-merged.mount: Deactivated successfully.
Jan 23 18:49:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:58 compute-0 podman[198569]: 2026-01-23 18:49:58.224417663 +0000 UTC m=+0.868167904 container remove d351a1f95aa2c1bd3ce1768c813c983403464cb71e641a54db3d3d2c5577f3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dijkstra, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:49:58 compute-0 systemd[1]: libpod-conmon-d351a1f95aa2c1bd3ce1768c813c983403464cb71e641a54db3d3d2c5577f3fd.scope: Deactivated successfully.
Jan 23 18:49:58 compute-0 python3.9[198803]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:49:58 compute-0 sudo[198801]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:58 compute-0 podman[198812]: 2026-01-23 18:49:58.418055154 +0000 UTC m=+0.053959199 container create 663b53ed88c330a1624e8f1fbf057ac15eb7bb5c7e3ea6bbeee1702bf78e11b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 18:49:58 compute-0 systemd[1]: Started libpod-conmon-663b53ed88c330a1624e8f1fbf057ac15eb7bb5c7e3ea6bbeee1702bf78e11b1.scope.
Jan 23 18:49:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656adbac000682f8e3a99d6a6c66063118cf25dd7e1f6e1a0ad103786cf19d63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:58 compute-0 podman[198812]: 2026-01-23 18:49:58.396473737 +0000 UTC m=+0.032377822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656adbac000682f8e3a99d6a6c66063118cf25dd7e1f6e1a0ad103786cf19d63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656adbac000682f8e3a99d6a6c66063118cf25dd7e1f6e1a0ad103786cf19d63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656adbac000682f8e3a99d6a6c66063118cf25dd7e1f6e1a0ad103786cf19d63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:49:58 compute-0 podman[198812]: 2026-01-23 18:49:58.518819652 +0000 UTC m=+0.154723737 container init 663b53ed88c330a1624e8f1fbf057ac15eb7bb5c7e3ea6bbeee1702bf78e11b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:49:58 compute-0 podman[198812]: 2026-01-23 18:49:58.528467369 +0000 UTC m=+0.164371404 container start 663b53ed88c330a1624e8f1fbf057ac15eb7bb5c7e3ea6bbeee1702bf78e11b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 18:49:58 compute-0 podman[198812]: 2026-01-23 18:49:58.53362244 +0000 UTC m=+0.169526485 container attach 663b53ed88c330a1624e8f1fbf057ac15eb7bb5c7e3ea6bbeee1702bf78e11b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 18:49:58 compute-0 sudo[198953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdszordpcwaursmmvxntdzpgowkegckv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194197.8521273-557-204566425164885/AnsiballZ_copy.py'
Jan 23 18:49:58 compute-0 sudo[198953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:58 compute-0 ceph-mon[75097]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:49:58 compute-0 python3.9[198955]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769194197.8521273-557-204566425164885/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:49:58 compute-0 sudo[198953]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:59 compute-0 lvm[199103]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:49:59 compute-0 lvm[199103]: VG ceph_vg0 finished
Jan 23 18:49:59 compute-0 lvm[199109]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:49:59 compute-0 lvm[199109]: VG ceph_vg1 finished
Jan 23 18:49:59 compute-0 lvm[199131]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:49:59 compute-0 lvm[199131]: VG ceph_vg2 finished
Jan 23 18:49:59 compute-0 lvm[199132]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:49:59 compute-0 lvm[199132]: VG ceph_vg1 finished
Jan 23 18:49:59 compute-0 compassionate_banzai[198856]: {}
Jan 23 18:49:59 compute-0 sudo[199185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbxsuxwgtkivbugvayyrkgcbuslmblys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194199.131618-557-249914250621311/AnsiballZ_stat.py'
Jan 23 18:49:59 compute-0 sudo[199185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:49:59 compute-0 systemd[1]: libpod-663b53ed88c330a1624e8f1fbf057ac15eb7bb5c7e3ea6bbeee1702bf78e11b1.scope: Deactivated successfully.
Jan 23 18:49:59 compute-0 systemd[1]: libpod-663b53ed88c330a1624e8f1fbf057ac15eb7bb5c7e3ea6bbeee1702bf78e11b1.scope: Consumed 1.358s CPU time.
Jan 23 18:49:59 compute-0 podman[198812]: 2026-01-23 18:49:59.401180228 +0000 UTC m=+1.037084273 container died 663b53ed88c330a1624e8f1fbf057ac15eb7bb5c7e3ea6bbeee1702bf78e11b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 18:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-656adbac000682f8e3a99d6a6c66063118cf25dd7e1f6e1a0ad103786cf19d63-merged.mount: Deactivated successfully.
Jan 23 18:49:59 compute-0 podman[198812]: 2026-01-23 18:49:59.448270555 +0000 UTC m=+1.084174590 container remove 663b53ed88c330a1624e8f1fbf057ac15eb7bb5c7e3ea6bbeee1702bf78e11b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 18:49:59 compute-0 systemd[1]: libpod-conmon-663b53ed88c330a1624e8f1fbf057ac15eb7bb5c7e3ea6bbeee1702bf78e11b1.scope: Deactivated successfully.
Jan 23 18:49:59 compute-0 sudo[198460]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:49:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:49:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:49:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:49:59 compute-0 python3.9[199187]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:49:59 compute-0 sudo[199198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:49:59 compute-0 sudo[199198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:49:59 compute-0 sudo[199198]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:59 compute-0 sudo[199185]: pam_unix(sudo:session): session closed for user root
Jan 23 18:49:59 compute-0 sudo[199345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viuztyftkesgiylnamqxgsfvmkrumtbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194199.131618-557-249914250621311/AnsiballZ_copy.py'
Jan 23 18:49:59 compute-0 sudo[199345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:50:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:50:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:50:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:50:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:50:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:50:00 compute-0 python3.9[199347]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769194199.131618-557-249914250621311/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:00 compute-0 sudo[199345]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:50:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:50:00 compute-0 ceph-mon[75097]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:00 compute-0 sudo[199497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcxfmexslbgwxoxwygamutqhbtjebily ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194200.2751536-670-143007259081980/AnsiballZ_command.py'
Jan 23 18:50:00 compute-0 sudo[199497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:00 compute-0 python3.9[199499]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 23 18:50:00 compute-0 sudo[199497]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:01 compute-0 sudo[199650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scotzqppzztvkvowzoocjxjruwtstwvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194200.9516397-679-103107448101906/AnsiballZ_file.py'
Jan 23 18:50:01 compute-0 sudo[199650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:01 compute-0 python3.9[199652]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:01 compute-0 sudo[199650]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:01 compute-0 sudo[199802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qszqegfuigqpzcbigorlxosovfobptro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194201.6218498-679-111251819935415/AnsiballZ_file.py'
Jan 23 18:50:01 compute-0 sudo[199802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:02 compute-0 python3.9[199804]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:02 compute-0 sudo[199802]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:02 compute-0 sudo[199954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zydyshvseqbkmcfyuvjoerbnapldqbox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194202.259719-679-28683015389386/AnsiballZ_file.py'
Jan 23 18:50:02 compute-0 sudo[199954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:02 compute-0 python3.9[199956]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:02 compute-0 sudo[199954]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:03 compute-0 sudo[200106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiyjrayarlozzznqpmpxvncgfyeuodzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194202.860241-679-220727393457433/AnsiballZ_file.py'
Jan 23 18:50:03 compute-0 sudo[200106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:03 compute-0 ceph-mon[75097]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:03 compute-0 python3.9[200108]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:03 compute-0 sudo[200106]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:03 compute-0 sudo[200258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gswjvhukgqfozmqwhryqrfdkrenlhhlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194203.4877174-679-227085803139601/AnsiballZ_file.py'
Jan 23 18:50:03 compute-0 sudo[200258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:03 compute-0 python3.9[200260]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:03 compute-0 sudo[200258]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:04 compute-0 sudo[200410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owrspavsucaswifnnauyxfigeizjigvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194204.1312058-679-57409463992491/AnsiballZ_file.py'
Jan 23 18:50:04 compute-0 sudo[200410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:04 compute-0 python3.9[200412]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:04 compute-0 sudo[200410]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:05 compute-0 sudo[200562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xahmtyifcrucijfqdgzncjqyrfypzrdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194204.7936866-679-87203206126518/AnsiballZ_file.py'
Jan 23 18:50:05 compute-0 sudo[200562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:05 compute-0 ceph-mon[75097]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:05 compute-0 python3.9[200564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:05 compute-0 sudo[200562]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:05 compute-0 sudo[200714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbqcumwlgxbgidrnlciptqjmasfbwmiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194205.4802625-679-90386577239863/AnsiballZ_file.py'
Jan 23 18:50:05 compute-0 sudo[200714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:05 compute-0 python3.9[200716]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:05 compute-0 sudo[200714]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:06 compute-0 sudo[200866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trikvvxxcmwiugbyfkyawcnwtpsrugds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194206.147765-679-281449061853075/AnsiballZ_file.py'
Jan 23 18:50:06 compute-0 sudo[200866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:06 compute-0 python3.9[200868]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:06 compute-0 sudo[200866]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:07 compute-0 sudo[201018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzzggzxtprivhisvhrxjduobcsoogotj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194206.7989998-679-246913554298136/AnsiballZ_file.py'
Jan 23 18:50:07 compute-0 sudo[201018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:07 compute-0 ceph-mon[75097]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:07 compute-0 python3.9[201020]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:07 compute-0 sudo[201018]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:07 compute-0 sudo[201170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrxiwmwvudssoxgtnsmpdzfryolgwfcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194207.4871979-679-6967070583235/AnsiballZ_file.py'
Jan 23 18:50:07 compute-0 sudo[201170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:07 compute-0 python3.9[201172]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:07 compute-0 sudo[201170]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:08 compute-0 sudo[201322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmioumktxftnlmbecxprkbctovgtrrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194208.1212575-679-214995752848483/AnsiballZ_file.py'
Jan 23 18:50:08 compute-0 sudo[201322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:08 compute-0 python3.9[201324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:08 compute-0 sudo[201322]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:08 compute-0 sudo[201474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukbfwfjlfaazogjmszebckyobdgdxqgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194208.715012-679-247685327859486/AnsiballZ_file.py'
Jan 23 18:50:08 compute-0 sudo[201474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:09 compute-0 python3.9[201476]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:09 compute-0 sudo[201474]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:09 compute-0 ceph-mon[75097]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:09 compute-0 sudo[201626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrgzhiwhdtjnuitxojfvebnwtxiznbyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194209.3769712-679-53340521429414/AnsiballZ_file.py'
Jan 23 18:50:09 compute-0 sudo[201626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:09 compute-0 python3.9[201628]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:09 compute-0 sudo[201626]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:10 compute-0 sudo[201778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yurmvmaoggajmputxyeqncoqxfjhniha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194210.0188334-778-172313861325962/AnsiballZ_stat.py'
Jan 23 18:50:10 compute-0 sudo[201778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:10 compute-0 ceph-mon[75097]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:10 compute-0 python3.9[201780]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:10 compute-0 sudo[201778]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:10 compute-0 sudo[201920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkglgeyskpaeqdcetlchtrmbahyhacde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194210.0188334-778-172313861325962/AnsiballZ_copy.py'
Jan 23 18:50:10 compute-0 sudo[201920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:11 compute-0 podman[201875]: 2026-01-23 18:50:11.001771808 +0000 UTC m=+0.126235208 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 18:50:11 compute-0 python3.9[201925]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194210.0188334-778-172313861325962/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:11 compute-0 sudo[201920]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:11 compute-0 sudo[202081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqrbikratprwsauqxvesqjofcrxkrzgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194211.2979703-778-81237064692564/AnsiballZ_stat.py'
Jan 23 18:50:11 compute-0 sudo[202081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:11 compute-0 python3.9[202083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:11 compute-0 sudo[202081]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:12 compute-0 sudo[202204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpwxurttlhgufzrrlxdydjrhzfyseocn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194211.2979703-778-81237064692564/AnsiballZ_copy.py'
Jan 23 18:50:12 compute-0 sudo[202204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:12 compute-0 python3.9[202206]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194211.2979703-778-81237064692564/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:12 compute-0 sudo[202204]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:12 compute-0 sudo[202356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyidksuvtnwzcdxxfexubzxbvzunqvnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194212.5038078-778-39433578432655/AnsiballZ_stat.py'
Jan 23 18:50:12 compute-0 sudo[202356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:12 compute-0 python3.9[202358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:12 compute-0 sudo[202356]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:13 compute-0 sudo[202479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geiiztqnhxkhouhtxahuqtfllfgeucrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194212.5038078-778-39433578432655/AnsiballZ_copy.py'
Jan 23 18:50:13 compute-0 sudo[202479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:50:13.568 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:50:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:50:13.569 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:50:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:50:13.569 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:50:13 compute-0 python3.9[202481]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194212.5038078-778-39433578432655/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:13 compute-0 sudo[202479]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:14 compute-0 ceph-mon[75097]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:14 compute-0 sudo[202631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjkmqrebsujpwenzdoduwdjbjwptkoxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194213.831881-778-227035723687271/AnsiballZ_stat.py'
Jan 23 18:50:14 compute-0 sudo[202631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:14 compute-0 python3.9[202633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:14 compute-0 sudo[202631]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:14 compute-0 sudo[202754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmzdyiflgwxxznyodxlgrxwwacqphtmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194213.831881-778-227035723687271/AnsiballZ_copy.py'
Jan 23 18:50:14 compute-0 sudo[202754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:14 compute-0 python3.9[202756]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194213.831881-778-227035723687271/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:14 compute-0 sudo[202754]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:15 compute-0 ceph-mon[75097]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:15 compute-0 sudo[202906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kloditbzpaxtbyvjbrcagsdhriuygltj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194215.05679-778-237019496673131/AnsiballZ_stat.py'
Jan 23 18:50:15 compute-0 sudo[202906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:15 compute-0 python3.9[202908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:15 compute-0 sudo[202906]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:15 compute-0 sudo[203029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwxhbaqofocwjblvtolhjklgivlkatpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194215.05679-778-237019496673131/AnsiballZ_copy.py'
Jan 23 18:50:15 compute-0 sudo[203029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:16 compute-0 python3.9[203031]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194215.05679-778-237019496673131/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:16 compute-0 sudo[203029]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:16 compute-0 sudo[203181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmewxfqpsccrmafnntpuamxfgyklbngv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194216.3085973-778-264626007474741/AnsiballZ_stat.py'
Jan 23 18:50:16 compute-0 sudo[203181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:16 compute-0 python3.9[203183]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:16 compute-0 sudo[203181]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:17 compute-0 ceph-mon[75097]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:17 compute-0 sudo[203304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snzboxbelqmleeudwouqugsrhzztbgws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194216.3085973-778-264626007474741/AnsiballZ_copy.py'
Jan 23 18:50:17 compute-0 sudo[203304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:17 compute-0 python3.9[203306]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194216.3085973-778-264626007474741/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:17 compute-0 sudo[203304]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:18 compute-0 sudo[203456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgxdpwpulnpnfcsdbpthxkrqdlfcxkfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194218.0372438-778-22153667559917/AnsiballZ_stat.py'
Jan 23 18:50:18 compute-0 sudo[203456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:18 compute-0 python3.9[203458]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:18 compute-0 sudo[203456]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:18 compute-0 sudo[203579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwzmtvmahzvxhwgouyuhsxcrjcoabqtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194218.0372438-778-22153667559917/AnsiballZ_copy.py'
Jan 23 18:50:18 compute-0 sudo[203579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:19 compute-0 python3.9[203581]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194218.0372438-778-22153667559917/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:19 compute-0 sudo[203579]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:19 compute-0 ceph-mon[75097]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:19 compute-0 sudo[203731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmgyxzikmpwfkfchopdxnfrfxapdtilu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194219.2872658-778-151105079599887/AnsiballZ_stat.py'
Jan 23 18:50:19 compute-0 sudo[203731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:19 compute-0 python3.9[203733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:19 compute-0 sudo[203731]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:19 compute-0 podman[203760]: 2026-01-23 18:50:19.985150679 +0000 UTC m=+0.063241228 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 23 18:50:20 compute-0 sudo[203871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbowbedxvwftgiraasjwwkquzhlzndsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194219.2872658-778-151105079599887/AnsiballZ_copy.py'
Jan 23 18:50:20 compute-0 sudo[203871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:20 compute-0 python3.9[203873]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194219.2872658-778-151105079599887/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:20 compute-0 sudo[203871]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:20 compute-0 ceph-mon[75097]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:20 compute-0 sudo[204023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkbrgxwlzodnductltlulomkilgeqmda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194220.5185058-778-166101848165045/AnsiballZ_stat.py'
Jan 23 18:50:20 compute-0 sudo[204023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:21 compute-0 python3.9[204025]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:21 compute-0 sudo[204023]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:21 compute-0 sudo[204146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqsqdsmcinavkimexyamgyxzifiietbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194220.5185058-778-166101848165045/AnsiballZ_copy.py'
Jan 23 18:50:21 compute-0 sudo[204146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:21 compute-0 python3.9[204148]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194220.5185058-778-166101848165045/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:21 compute-0 sudo[204146]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:22 compute-0 sudo[204298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtrxbtxzuujdehibtfygzuvzgzzegcbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194221.823367-778-253995579078173/AnsiballZ_stat.py'
Jan 23 18:50:22 compute-0 sudo[204298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:22 compute-0 python3.9[204300]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:22 compute-0 sudo[204298]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:22 compute-0 sudo[204421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmazfsburexqqwestkhjqnmopzzzxhog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194221.823367-778-253995579078173/AnsiballZ_copy.py'
Jan 23 18:50:22 compute-0 sudo[204421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:22 compute-0 python3.9[204423]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194221.823367-778-253995579078173/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:22 compute-0 sudo[204421]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:23 compute-0 sudo[204573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwxpzjifggulalhedzroyrmbwhpginhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194223.017546-778-269854050772287/AnsiballZ_stat.py'
Jan 23 18:50:23 compute-0 sudo[204573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:23 compute-0 python3.9[204575]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:23 compute-0 ceph-mon[75097]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:23 compute-0 sudo[204573]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:24 compute-0 sudo[204696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgrrsvbcwrohwicdnljzgphpjnmhyipy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194223.017546-778-269854050772287/AnsiballZ_copy.py'
Jan 23 18:50:24 compute-0 sudo[204696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:24 compute-0 python3.9[204698]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194223.017546-778-269854050772287/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:24 compute-0 sudo[204696]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:24 compute-0 ceph-mon[75097]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:24 compute-0 sudo[204848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwmftkrtspfouyutwhberrfqokozhtvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194224.496868-778-188393352415241/AnsiballZ_stat.py'
Jan 23 18:50:24 compute-0 sudo[204848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:24 compute-0 python3.9[204850]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:24 compute-0 sudo[204848]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:25 compute-0 sudo[204971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbifoaplqebylkneszjpabdgfolyynnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194224.496868-778-188393352415241/AnsiballZ_copy.py'
Jan 23 18:50:25 compute-0 sudo[204971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:25 compute-0 python3.9[204973]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194224.496868-778-188393352415241/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:25 compute-0 sudo[204971]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:25 compute-0 auditd[706]: Audit daemon rotating log files
Jan 23 18:50:26 compute-0 sudo[205123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srlvgwvgoqcickncehewccxovebhrhdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194225.69764-778-217219459898924/AnsiballZ_stat.py'
Jan 23 18:50:26 compute-0 sudo[205123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:26 compute-0 python3.9[205125]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:26 compute-0 sudo[205123]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:26 compute-0 sudo[205246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwwzicxeyppdromduiowmeamrdczokic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194225.69764-778-217219459898924/AnsiballZ_copy.py'
Jan 23 18:50:26 compute-0 sudo[205246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:26 compute-0 python3.9[205248]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194225.69764-778-217219459898924/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:26 compute-0 sudo[205246]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:26 compute-0 ceph-mon[75097]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:27 compute-0 sudo[205398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xisfjwtpsikqxvunqpllbkfkjgkbania ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194226.8775022-778-173881245369301/AnsiballZ_stat.py'
Jan 23 18:50:27 compute-0 sudo[205398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:27 compute-0 python3.9[205400]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:27 compute-0 sudo[205398]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:27 compute-0 sudo[205521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdhdlmzsaqapqnwlbslqqgvlbhivpitk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194226.8775022-778-173881245369301/AnsiballZ_copy.py'
Jan 23 18:50:27 compute-0 sudo[205521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:28 compute-0 python3.9[205523]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194226.8775022-778-173881245369301/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:28 compute-0 sudo[205521]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:28 compute-0 python3.9[205673]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:50:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:50:28
Jan 23 18:50:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:50:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:50:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'backups', 'images', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'vms']
Jan 23 18:50:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:50:29 compute-0 ceph-mon[75097]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:29 compute-0 sudo[205826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzpuciqjkgnrucplytygjekoyryxvery ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194229.1146245-984-96643802337878/AnsiballZ_seboolean.py'
Jan 23 18:50:29 compute-0 sudo[205826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:29 compute-0 python3.9[205828]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:50:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:50:31 compute-0 sudo[205826]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:31 compute-0 ceph-mon[75097]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:31 compute-0 sudo[205982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnbrqxlfqmhhphhccpylbppsmtcwemvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194231.2420585-992-187043478130057/AnsiballZ_copy.py'
Jan 23 18:50:31 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 23 18:50:31 compute-0 sudo[205982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:31 compute-0 python3.9[205984]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:31 compute-0 sudo[205982]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:32 compute-0 sudo[206134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kirtrugfcurbahpchrsxfgsxjkyzmjxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194231.9077072-992-73576295587066/AnsiballZ_copy.py'
Jan 23 18:50:32 compute-0 sudo[206134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:32 compute-0 python3.9[206136]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:32 compute-0 sudo[206134]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:32 compute-0 ceph-mon[75097]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:32 compute-0 sudo[206286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugcuteiflzwqjnsrybpficzzlpjexelx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194232.4920895-992-113107801963640/AnsiballZ_copy.py'
Jan 23 18:50:32 compute-0 sudo[206286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:32 compute-0 python3.9[206288]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:32 compute-0 sudo[206286]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:33 compute-0 sudo[206438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mengdbaytrwokpsenpuvpninhevburpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194233.090755-992-166630715169542/AnsiballZ_copy.py'
Jan 23 18:50:33 compute-0 sudo[206438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:33 compute-0 python3.9[206440]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:33 compute-0 sudo[206438]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:34 compute-0 sudo[206590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dexklvutuceesqjuzdzkecffcmelbtby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194233.8579607-992-47962173155356/AnsiballZ_copy.py'
Jan 23 18:50:34 compute-0 sudo[206590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:34 compute-0 python3.9[206592]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:34 compute-0 sudo[206590]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:34 compute-0 sudo[206742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcegnoehyherjopovbzmahwmmhwecrsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194234.592402-1028-151865748503292/AnsiballZ_copy.py'
Jan 23 18:50:34 compute-0 sudo[206742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:34 compute-0 ceph-mon[75097]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:35 compute-0 python3.9[206744]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:35 compute-0 sudo[206742]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:35 compute-0 sudo[206894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgcfzvpxruynncseqffroeyrkntqvflh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194235.3831491-1028-42163807769822/AnsiballZ_copy.py'
Jan 23 18:50:35 compute-0 sudo[206894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:35 compute-0 python3.9[206896]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:35 compute-0 sudo[206894]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:36 compute-0 sudo[207046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgmgjfsmzntsddtdkyxopkuxypapejto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194235.9984748-1028-169081303575580/AnsiballZ_copy.py'
Jan 23 18:50:36 compute-0 sudo[207046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:36 compute-0 python3.9[207048]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:36 compute-0 sudo[207046]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:36 compute-0 sudo[207198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eezukdjmdbinvhzxlasonbbqalejxggo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194236.733403-1028-7988497069614/AnsiballZ_copy.py'
Jan 23 18:50:36 compute-0 sudo[207198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:37 compute-0 python3.9[207200]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:37 compute-0 sudo[207198]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:37 compute-0 ceph-mon[75097]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:37 compute-0 sudo[207350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odhilonuxjgrocrfiaiwyzqwjgxjksjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194237.3840299-1028-218499333245562/AnsiballZ_copy.py'
Jan 23 18:50:37 compute-0 sudo[207350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:37 compute-0 python3.9[207352]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:37 compute-0 sudo[207350]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:38 compute-0 sudo[207502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgtdapgocdolwaucuwrtqotzptjnscph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194238.0621054-1064-71978416356027/AnsiballZ_systemd.py'
Jan 23 18:50:38 compute-0 sudo[207502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:38 compute-0 python3.9[207504]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:50:38 compute-0 systemd[1]: Reloading.
Jan 23 18:50:38 compute-0 systemd-sysv-generator[207534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:50:38 compute-0 systemd-rc-local-generator[207530]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:50:38 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 23 18:50:39 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 23 18:50:39 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 23 18:50:39 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 23 18:50:39 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 23 18:50:39 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 23 18:50:39 compute-0 sudo[207502]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:39 compute-0 ceph-mon[75097]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:39 compute-0 sudo[207695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieyydocawznxwhlnbpdsuakwpglyycau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194239.2718253-1064-22490981735507/AnsiballZ_systemd.py'
Jan 23 18:50:39 compute-0 sudo[207695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:39 compute-0 python3.9[207697]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:50:39 compute-0 systemd[1]: Reloading.
Jan 23 18:50:39 compute-0 systemd-sysv-generator[207725]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:50:39 compute-0 systemd-rc-local-generator[207720]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:50:40 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 23 18:50:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 23 18:50:40 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 23 18:50:40 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 23 18:50:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 23 18:50:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 23 18:50:40 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 23 18:50:40 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:40 compute-0 sudo[207695]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:50:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:50:40 compute-0 sudo[207911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndnisjmlddjvgqaahmkzzigrymjnjccf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194240.3788106-1064-137287323252269/AnsiballZ_systemd.py'
Jan 23 18:50:40 compute-0 sudo[207911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:41 compute-0 python3.9[207913]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:50:41 compute-0 systemd[1]: Reloading.
Jan 23 18:50:41 compute-0 systemd-rc-local-generator[207949]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:50:41 compute-0 systemd-sysv-generator[207952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:50:41 compute-0 podman[207915]: 2026-01-23 18:50:41.238364524 +0000 UTC m=+0.145002574 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 23 18:50:41 compute-0 ceph-mon[75097]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:41 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 23 18:50:41 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 23 18:50:41 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 23 18:50:41 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 23 18:50:41 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 23 18:50:41 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 23 18:50:41 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 23 18:50:41 compute-0 sudo[207911]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:41 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 23 18:50:41 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 23 18:50:41 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 23 18:50:41 compute-0 sudo[208154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-marlsdgsyjcomycaaaqworyxjkieagch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194241.6099584-1064-226110301337786/AnsiballZ_systemd.py'
Jan 23 18:50:41 compute-0 sudo[208154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:42 compute-0 python3.9[208156]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:50:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:42 compute-0 systemd[1]: Reloading.
Jan 23 18:50:42 compute-0 systemd-sysv-generator[208190]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:50:42 compute-0 systemd-rc-local-generator[208187]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:50:42 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 23 18:50:42 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 23 18:50:42 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 18:50:42 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 23 18:50:42 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 23 18:50:42 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 23 18:50:42 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 23 18:50:42 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 23 18:50:42 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 23 18:50:42 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 23 18:50:42 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 23 18:50:42 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 23 18:50:42 compute-0 sudo[208154]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:42 compute-0 setroubleshoot[207974]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ed303f1b-15e7-441f-bfbc-a9078030e4c5
Jan 23 18:50:42 compute-0 setroubleshoot[207974]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 23 18:50:43 compute-0 sudo[208373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muvvsnpaosniguktxehckewhrvbtmcbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194242.7890048-1064-220424305049573/AnsiballZ_systemd.py'
Jan 23 18:50:43 compute-0 sudo[208373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:43 compute-0 ceph-mon[75097]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:43 compute-0 python3.9[208375]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:50:43 compute-0 systemd[1]: Reloading.
Jan 23 18:50:43 compute-0 systemd-rc-local-generator[208398]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:50:43 compute-0 systemd-sysv-generator[208403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:50:43 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 23 18:50:43 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 23 18:50:43 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 23 18:50:43 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 23 18:50:43 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 23 18:50:43 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 23 18:50:43 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 23 18:50:43 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 23 18:50:43 compute-0 sudo[208373]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:44 compute-0 sudo[208584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfdkjahbeeuowjxqkufslfohwbakjvtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194244.1002212-1101-224990246278443/AnsiballZ_file.py'
Jan 23 18:50:44 compute-0 sudo[208584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:44 compute-0 python3.9[208586]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:44 compute-0 sudo[208584]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:45 compute-0 sudo[208736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpgvtppkebtmdganmucmpseiqkkqebsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194244.7507436-1109-172757344769378/AnsiballZ_find.py'
Jan 23 18:50:45 compute-0 sudo[208736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:45 compute-0 python3.9[208738]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 18:50:45 compute-0 sudo[208736]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:45 compute-0 ceph-mon[75097]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:45 compute-0 sudo[208888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghagtzmpopkxvpdtbtnqdymmiwpwlrkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194245.41923-1117-179462060101317/AnsiballZ_command.py'
Jan 23 18:50:45 compute-0 sudo[208888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:45 compute-0 python3.9[208890]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:50:45 compute-0 sudo[208888]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:46 compute-0 python3.9[209044]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 18:50:47 compute-0 python3.9[209194]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:47 compute-0 ceph-mon[75097]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:47 compute-0 python3.9[209315]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769194246.8960822-1136-190157103073860/.source.xml follow=False _original_basename=secret.xml.j2 checksum=ff3b43d5f1fd9eff4a032784b955813997b3184e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:48 compute-0 sudo[209465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmtlidtpoxmbeipmmoneztrdnavgzzoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194248.0607066-1151-195837727838953/AnsiballZ_command.py'
Jan 23 18:50:48 compute-0 sudo[209465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:48 compute-0 ceph-mon[75097]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:48 compute-0 python3.9[209467]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 4da2adac-d096-5ead-9ec5-3108259ba9e4
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:50:48 compute-0 polkitd[43413]: Registered Authentication Agent for unix-process:209469:336454 (system bus name :1.2576 [pkttyagent --process 209469 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 23 18:50:48 compute-0 polkitd[43413]: Unregistered Authentication Agent for unix-process:209469:336454 (system bus name :1.2576, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 23 18:50:48 compute-0 polkitd[43413]: Registered Authentication Agent for unix-process:209468:336453 (system bus name :1.2577 [pkttyagent --process 209468 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 23 18:50:48 compute-0 polkitd[43413]: Unregistered Authentication Agent for unix-process:209468:336453 (system bus name :1.2577, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 23 18:50:48 compute-0 sudo[209465]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:49 compute-0 python3.9[209629]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:49 compute-0 sudo[209779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nywtylnkqikmwblmmkwiisbzabuljcmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194249.4100764-1167-38919045462211/AnsiballZ_command.py'
Jan 23 18:50:49 compute-0 sudo[209779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:49 compute-0 sudo[209779]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:50 compute-0 sudo[209945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boeorkusqtoevqbrzkwxzwsjwxdzygqw ; FSID=4da2adac-d096-5ead-9ec5-3108259ba9e4 KEY=AQDrvnNpAAAAABAAuDzh3RYW8bPoDkiVx+RkoQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194250.0804653-1175-218650820792235/AnsiballZ_command.py'
Jan 23 18:50:50 compute-0 sudo[209945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:50 compute-0 podman[209906]: 2026-01-23 18:50:50.394692379 +0000 UTC m=+0.054929088 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 18:50:50 compute-0 polkitd[43413]: Registered Authentication Agent for unix-process:209955:336659 (system bus name :1.2580 [pkttyagent --process 209955 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 23 18:50:50 compute-0 polkitd[43413]: Unregistered Authentication Agent for unix-process:209955:336659 (system bus name :1.2580, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 23 18:50:50 compute-0 sudo[209945]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:51 compute-0 sudo[210110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvfbxheupdqnvxucklwkhzssxqlswbjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194250.874819-1183-183363462552812/AnsiballZ_copy.py'
Jan 23 18:50:51 compute-0 sudo[210110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:51 compute-0 ceph-mon[75097]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:51 compute-0 python3.9[210112]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:51 compute-0 sudo[210110]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:51 compute-0 sudo[210262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbzypappjbumzmmnmrfmohmiefyjefzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194251.5379546-1191-85510340975555/AnsiballZ_stat.py'
Jan 23 18:50:51 compute-0 sudo[210262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:52 compute-0 python3.9[210264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:52 compute-0 sudo[210262]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:52 compute-0 sudo[210385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqifljnbbovbzqfnlcpecpilzcslrthl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194251.5379546-1191-85510340975555/AnsiballZ_copy.py'
Jan 23 18:50:52 compute-0 sudo[210385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:52 compute-0 python3.9[210387]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769194251.5379546-1191-85510340975555/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:52 compute-0 sudo[210385]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:52 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 23 18:50:52 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.002s CPU time.
Jan 23 18:50:52 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 23 18:50:53 compute-0 sudo[210538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djkhsyydidjdagmjvswklgereamfavbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194252.9102228-1207-199673907964629/AnsiballZ_file.py'
Jan 23 18:50:53 compute-0 sudo[210538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:53 compute-0 ceph-mon[75097]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:53 compute-0 python3.9[210540]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:53 compute-0 sudo[210538]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:53 compute-0 sudo[210690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqngnqbqvsfhcqbcfhzzfiedsvajuuyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194253.6043756-1215-84247606979473/AnsiballZ_stat.py'
Jan 23 18:50:53 compute-0 sudo[210690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:54 compute-0 python3.9[210692]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:54 compute-0 sudo[210690]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:54 compute-0 sudo[210768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsinnecnjzeeiwohcymewkleyvvkokza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194253.6043756-1215-84247606979473/AnsiballZ_file.py'
Jan 23 18:50:54 compute-0 sudo[210768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:54 compute-0 python3.9[210770]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:54 compute-0 sudo[210768]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:55 compute-0 sudo[210920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-levkibvxryyimlgnjqbycfeihkvdeacx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194254.7515593-1227-112115439376071/AnsiballZ_stat.py'
Jan 23 18:50:55 compute-0 sudo[210920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:55 compute-0 python3.9[210922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:55 compute-0 sudo[210920]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:55 compute-0 ceph-mon[75097]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:55 compute-0 sudo[210998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zffuxdswvkpvbaweotbvndrrumuhossf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194254.7515593-1227-112115439376071/AnsiballZ_file.py'
Jan 23 18:50:55 compute-0 sudo[210998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:55 compute-0 python3.9[211000]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6na5bj7v recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:55 compute-0 sudo[210998]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:56 compute-0 sudo[211150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywnejrgddaljmvizanbllexivtddkqiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194255.8990316-1239-220572501110121/AnsiballZ_stat.py'
Jan 23 18:50:56 compute-0 sudo[211150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:56 compute-0 python3.9[211152]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:56 compute-0 sudo[211150]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:56 compute-0 sudo[211228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsdebldutjwgkmdtigxjvntsxcvlibur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194255.8990316-1239-220572501110121/AnsiballZ_file.py'
Jan 23 18:50:56 compute-0 sudo[211228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:56 compute-0 python3.9[211230]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:56 compute-0 sudo[211228]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:57 compute-0 ceph-mon[75097]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:57 compute-0 sudo[211380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsvwvwuyrvtomdwijcmmxwitcdeawrxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194257.17773-1252-17326746863135/AnsiballZ_command.py'
Jan 23 18:50:57 compute-0 sudo[211380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:57 compute-0 python3.9[211382]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:50:57 compute-0 sudo[211380]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:50:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:58 compute-0 sudo[211533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmvrqudkcjcvexgxxicvnmfoenlrdqhb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769194257.8630762-1260-130335699011204/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 18:50:58 compute-0 sudo[211533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:58 compute-0 python3[211535]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 18:50:58 compute-0 sudo[211533]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:58 compute-0 ceph-mon[75097]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:50:58 compute-0 sudo[211685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqtiwixoqcrjpchbvagsvuyozskxffnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194258.6871476-1268-37181015171653/AnsiballZ_stat.py'
Jan 23 18:50:58 compute-0 sudo[211685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:59 compute-0 python3.9[211687]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:50:59 compute-0 sudo[211685]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:59 compute-0 sudo[211763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkymraztmzxsukngemiwxwpwzgjirwwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194258.6871476-1268-37181015171653/AnsiballZ_file.py'
Jan 23 18:50:59 compute-0 sudo[211763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:50:59 compute-0 python3.9[211765]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:50:59 compute-0 sudo[211763]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:59 compute-0 sudo[211766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:50:59 compute-0 sudo[211766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:50:59 compute-0 sudo[211766]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:59 compute-0 sudo[211791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:50:59 compute-0 sudo[211791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:51:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:51:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:51:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:51:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:51:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:51:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:51:00 compute-0 sudo[211993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhueearhlxvbalcgwlryhqqhpkedmjjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194259.8164628-1280-157560131663193/AnsiballZ_stat.py'
Jan 23 18:51:00 compute-0 sudo[211993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:00 compute-0 sudo[211791]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:51:00 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:51:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:51:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:51:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:51:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:51:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:51:00 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:51:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:51:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:51:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:51:00 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:51:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:51:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:51:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:51:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:51:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:51:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:51:00 compute-0 sudo[212000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:51:00 compute-0 sudo[212000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:51:00 compute-0 sudo[212000]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:00 compute-0 python3.9[211999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:51:00 compute-0 sudo[212025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:51:00 compute-0 sudo[212025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:51:00 compute-0 sudo[211993]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:00 compute-0 podman[212118]: 2026-01-23 18:51:00.656330843 +0000 UTC m=+0.052334265 container create aa8605c096785f07713f436c418bc6b1cde4e72754c76114fd9447830b8f8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bouman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:51:00 compute-0 systemd[1]: Started libpod-conmon-aa8605c096785f07713f436c418bc6b1cde4e72754c76114fd9447830b8f8219.scope.
Jan 23 18:51:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:51:00 compute-0 podman[212118]: 2026-01-23 18:51:00.634719245 +0000 UTC m=+0.030722707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:51:00 compute-0 podman[212118]: 2026-01-23 18:51:00.742506556 +0000 UTC m=+0.138509998 container init aa8605c096785f07713f436c418bc6b1cde4e72754c76114fd9447830b8f8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 18:51:00 compute-0 podman[212118]: 2026-01-23 18:51:00.752016084 +0000 UTC m=+0.148019496 container start aa8605c096785f07713f436c418bc6b1cde4e72754c76114fd9447830b8f8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bouman, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 18:51:00 compute-0 trusting_bouman[212176]: 167 167
Jan 23 18:51:00 compute-0 systemd[1]: libpod-aa8605c096785f07713f436c418bc6b1cde4e72754c76114fd9447830b8f8219.scope: Deactivated successfully.
Jan 23 18:51:00 compute-0 sudo[212213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzkkgfmnmvrngmvsdskjusjqswyoonya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194259.8164628-1280-157560131663193/AnsiballZ_copy.py'
Jan 23 18:51:00 compute-0 sudo[212213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:00 compute-0 podman[212118]: 2026-01-23 18:51:00.830297159 +0000 UTC m=+0.226300621 container attach aa8605c096785f07713f436c418bc6b1cde4e72754c76114fd9447830b8f8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:51:00 compute-0 podman[212118]: 2026-01-23 18:51:00.832353457 +0000 UTC m=+0.228356899 container died aa8605c096785f07713f436c418bc6b1cde4e72754c76114fd9447830b8f8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:51:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcb4b3a8079c26926201304170d1fc4ba18b87565d7d9eb928f808f53f8517f5-merged.mount: Deactivated successfully.
Jan 23 18:51:00 compute-0 podman[212118]: 2026-01-23 18:51:00.965864245 +0000 UTC m=+0.361867687 container remove aa8605c096785f07713f436c418bc6b1cde4e72754c76114fd9447830b8f8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 23 18:51:00 compute-0 systemd[1]: libpod-conmon-aa8605c096785f07713f436c418bc6b1cde4e72754c76114fd9447830b8f8219.scope: Deactivated successfully.
Jan 23 18:51:00 compute-0 python3.9[212219]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194259.8164628-1280-157560131663193/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:01 compute-0 sudo[212213]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:01 compute-0 podman[212253]: 2026-01-23 18:51:01.148838686 +0000 UTC m=+0.027259133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:51:01 compute-0 podman[212253]: 2026-01-23 18:51:01.252256443 +0000 UTC m=+0.130676880 container create 28677e81a0c4b3f6290cf415eede59f40c6ec8f8c7e365b107e4e61c0a1ff5b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:51:01 compute-0 systemd[1]: Started libpod-conmon-28677e81a0c4b3f6290cf415eede59f40c6ec8f8c7e365b107e4e61c0a1ff5b6.scope.
Jan 23 18:51:01 compute-0 ceph-mon[75097]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0110e2d0caabae1c433da33029c79767950073431e14eae75d552ee177f9b796/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0110e2d0caabae1c433da33029c79767950073431e14eae75d552ee177f9b796/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0110e2d0caabae1c433da33029c79767950073431e14eae75d552ee177f9b796/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0110e2d0caabae1c433da33029c79767950073431e14eae75d552ee177f9b796/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0110e2d0caabae1c433da33029c79767950073431e14eae75d552ee177f9b796/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:01 compute-0 podman[212253]: 2026-01-23 18:51:01.344883341 +0000 UTC m=+0.223303788 container init 28677e81a0c4b3f6290cf415eede59f40c6ec8f8c7e365b107e4e61c0a1ff5b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 18:51:01 compute-0 podman[212253]: 2026-01-23 18:51:01.355333692 +0000 UTC m=+0.233754139 container start 28677e81a0c4b3f6290cf415eede59f40c6ec8f8c7e365b107e4e61c0a1ff5b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:51:01 compute-0 podman[212253]: 2026-01-23 18:51:01.359296087 +0000 UTC m=+0.237716544 container attach 28677e81a0c4b3f6290cf415eede59f40c6ec8f8c7e365b107e4e61c0a1ff5b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:51:01 compute-0 sudo[212400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifmbhwqulcwmcwkmcydekogisymiupiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194261.1646607-1295-255267856138351/AnsiballZ_stat.py'
Jan 23 18:51:01 compute-0 sudo[212400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:01 compute-0 python3.9[212402]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:51:01 compute-0 sudo[212400]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:01 compute-0 bold_mclaren[212335]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:51:01 compute-0 bold_mclaren[212335]: --> All data devices are unavailable
Jan 23 18:51:01 compute-0 systemd[1]: libpod-28677e81a0c4b3f6290cf415eede59f40c6ec8f8c7e365b107e4e61c0a1ff5b6.scope: Deactivated successfully.
Jan 23 18:51:01 compute-0 podman[212253]: 2026-01-23 18:51:01.889402691 +0000 UTC m=+0.767823138 container died 28677e81a0c4b3f6290cf415eede59f40c6ec8f8c7e365b107e4e61c0a1ff5b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 23 18:51:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0110e2d0caabae1c433da33029c79767950073431e14eae75d552ee177f9b796-merged.mount: Deactivated successfully.
Jan 23 18:51:01 compute-0 sudo[212505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obouidankdyioeowaculescdgqlpexsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194261.1646607-1295-255267856138351/AnsiballZ_file.py'
Jan 23 18:51:01 compute-0 sudo[212505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:01 compute-0 podman[212253]: 2026-01-23 18:51:01.941070929 +0000 UTC m=+0.819491356 container remove 28677e81a0c4b3f6290cf415eede59f40c6ec8f8c7e365b107e4e61c0a1ff5b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:51:01 compute-0 systemd[1]: libpod-conmon-28677e81a0c4b3f6290cf415eede59f40c6ec8f8c7e365b107e4e61c0a1ff5b6.scope: Deactivated successfully.
Jan 23 18:51:01 compute-0 sudo[212025]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:02 compute-0 sudo[212508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:51:02 compute-0 sudo[212508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:51:02 compute-0 sudo[212508]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:02 compute-0 sudo[212533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:51:02 compute-0 sudo[212533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:51:02 compute-0 python3.9[212507]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:02 compute-0 sudo[212505]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:02 compute-0 podman[212640]: 2026-01-23 18:51:02.398861142 +0000 UTC m=+0.038507793 container create 1d0aa041d3cedd656445bbc1de7e2f1579871d39b6711498f634fe22fd94c4fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_burnell, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 18:51:02 compute-0 systemd[1]: Started libpod-conmon-1d0aa041d3cedd656445bbc1de7e2f1579871d39b6711498f634fe22fd94c4fe.scope.
Jan 23 18:51:02 compute-0 podman[212640]: 2026-01-23 18:51:02.38083699 +0000 UTC m=+0.020483661 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:51:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:51:02 compute-0 podman[212640]: 2026-01-23 18:51:02.495064286 +0000 UTC m=+0.134710967 container init 1d0aa041d3cedd656445bbc1de7e2f1579871d39b6711498f634fe22fd94c4fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_burnell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 18:51:02 compute-0 podman[212640]: 2026-01-23 18:51:02.502368821 +0000 UTC m=+0.142015472 container start 1d0aa041d3cedd656445bbc1de7e2f1579871d39b6711498f634fe22fd94c4fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_burnell, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 18:51:02 compute-0 podman[212640]: 2026-01-23 18:51:02.506065779 +0000 UTC m=+0.145712430 container attach 1d0aa041d3cedd656445bbc1de7e2f1579871d39b6711498f634fe22fd94c4fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:51:02 compute-0 optimistic_burnell[212669]: 167 167
Jan 23 18:51:02 compute-0 systemd[1]: libpod-1d0aa041d3cedd656445bbc1de7e2f1579871d39b6711498f634fe22fd94c4fe.scope: Deactivated successfully.
Jan 23 18:51:02 compute-0 podman[212640]: 2026-01-23 18:51:02.509257626 +0000 UTC m=+0.148904277 container died 1d0aa041d3cedd656445bbc1de7e2f1579871d39b6711498f634fe22fd94c4fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:51:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b58bbc7ac320d351a3819bc549f8a30b2bf6df8740f2f8ec7cc9ebeb400e92d8-merged.mount: Deactivated successfully.
Jan 23 18:51:02 compute-0 podman[212640]: 2026-01-23 18:51:02.549319416 +0000 UTC m=+0.188966077 container remove 1d0aa041d3cedd656445bbc1de7e2f1579871d39b6711498f634fe22fd94c4fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_burnell, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:51:02 compute-0 systemd[1]: libpod-conmon-1d0aa041d3cedd656445bbc1de7e2f1579871d39b6711498f634fe22fd94c4fe.scope: Deactivated successfully.
Jan 23 18:51:02 compute-0 sudo[212754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtgbbzboxwvwqitkqagxqcrtiyvxcdcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194262.3096302-1307-275652116472090/AnsiballZ_stat.py'
Jan 23 18:51:02 compute-0 sudo[212754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:02 compute-0 podman[212762]: 2026-01-23 18:51:02.706737655 +0000 UTC m=+0.041834243 container create 80151000a174d4f011232b526d26c1b452919c40e7280ea033f56541358b607e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_galileo, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:51:02 compute-0 systemd[1]: Started libpod-conmon-80151000a174d4f011232b526d26c1b452919c40e7280ea033f56541358b607e.scope.
Jan 23 18:51:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:02 compute-0 podman[212762]: 2026-01-23 18:51:02.689033491 +0000 UTC m=+0.024130119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:51:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:51:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e32bafc0cecec558fc829fba102c3e8636b6bb5e6cf44aa73e3d6dc6ac879a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e32bafc0cecec558fc829fba102c3e8636b6bb5e6cf44aa73e3d6dc6ac879a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e32bafc0cecec558fc829fba102c3e8636b6bb5e6cf44aa73e3d6dc6ac879a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e32bafc0cecec558fc829fba102c3e8636b6bb5e6cf44aa73e3d6dc6ac879a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:02 compute-0 podman[212762]: 2026-01-23 18:51:02.806671168 +0000 UTC m=+0.141767796 container init 80151000a174d4f011232b526d26c1b452919c40e7280ea033f56541358b607e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:51:02 compute-0 podman[212762]: 2026-01-23 18:51:02.815261875 +0000 UTC m=+0.150358473 container start 80151000a174d4f011232b526d26c1b452919c40e7280ea033f56541358b607e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:51:02 compute-0 python3.9[212756]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:51:02 compute-0 podman[212762]: 2026-01-23 18:51:02.819246269 +0000 UTC m=+0.154342897 container attach 80151000a174d4f011232b526d26c1b452919c40e7280ea033f56541358b607e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_galileo, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:51:02 compute-0 sudo[212754]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:03 compute-0 sudo[212862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udskxdqozngutedlbukfcdmywvkeqgag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194262.3096302-1307-275652116472090/AnsiballZ_file.py'
Jan 23 18:51:03 compute-0 sudo[212862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:03 compute-0 reverent_galileo[212780]: {
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:     "0": [
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:         {
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "devices": [
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "/dev/loop3"
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             ],
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_name": "ceph_lv0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_size": "21470642176",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "name": "ceph_lv0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "tags": {
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.cluster_name": "ceph",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.crush_device_class": "",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.encrypted": "0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.objectstore": "bluestore",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.osd_id": "0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.type": "block",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.vdo": "0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.with_tpm": "0"
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             },
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "type": "block",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "vg_name": "ceph_vg0"
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:         }
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:     ],
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:     "1": [
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:         {
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "devices": [
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "/dev/loop4"
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             ],
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_name": "ceph_lv1",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_size": "21470642176",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "name": "ceph_lv1",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "tags": {
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.cluster_name": "ceph",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.crush_device_class": "",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.encrypted": "0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.objectstore": "bluestore",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.osd_id": "1",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.type": "block",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.vdo": "0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.with_tpm": "0"
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             },
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "type": "block",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "vg_name": "ceph_vg1"
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:         }
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:     ],
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:     "2": [
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:         {
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "devices": [
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "/dev/loop5"
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             ],
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_name": "ceph_lv2",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_size": "21470642176",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "name": "ceph_lv2",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "tags": {
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.cluster_name": "ceph",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.crush_device_class": "",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.encrypted": "0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.objectstore": "bluestore",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.osd_id": "2",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.type": "block",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.vdo": "0",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:                 "ceph.with_tpm": "0"
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             },
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "type": "block",
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:             "vg_name": "ceph_vg2"
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:         }
Jan 23 18:51:03 compute-0 reverent_galileo[212780]:     ]
Jan 23 18:51:03 compute-0 reverent_galileo[212780]: }
Jan 23 18:51:03 compute-0 systemd[1]: libpod-80151000a174d4f011232b526d26c1b452919c40e7280ea033f56541358b607e.scope: Deactivated successfully.
Jan 23 18:51:03 compute-0 podman[212762]: 2026-01-23 18:51:03.161761392 +0000 UTC m=+0.496858050 container died 80151000a174d4f011232b526d26c1b452919c40e7280ea033f56541358b607e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_galileo, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:51:03 compute-0 python3.9[212866]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:03 compute-0 sudo[212862]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:03 compute-0 ceph-mon[75097]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e32bafc0cecec558fc829fba102c3e8636b6bb5e6cf44aa73e3d6dc6ac879a2-merged.mount: Deactivated successfully.
Jan 23 18:51:03 compute-0 podman[212762]: 2026-01-23 18:51:03.629974364 +0000 UTC m=+0.965071002 container remove 80151000a174d4f011232b526d26c1b452919c40e7280ea033f56541358b607e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_galileo, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:51:03 compute-0 systemd[1]: libpod-conmon-80151000a174d4f011232b526d26c1b452919c40e7280ea033f56541358b607e.scope: Deactivated successfully.
Jan 23 18:51:03 compute-0 sudo[212533]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:03 compute-0 sudo[212959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:51:03 compute-0 sudo[212959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:51:03 compute-0 sudo[212959]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:03 compute-0 sudo[213007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:51:03 compute-0 sudo[213007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:51:03 compute-0 sudo[213078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sasvzjtlszzroeaoibcrrervrkpzmqcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194263.4734855-1319-80423932971410/AnsiballZ_stat.py'
Jan 23 18:51:03 compute-0 sudo[213078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:04 compute-0 python3.9[213080]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:51:04 compute-0 sudo[213078]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:04 compute-0 podman[213093]: 2026-01-23 18:51:04.125531992 +0000 UTC m=+0.087551557 container create 836b3c9ebd7d997c2b75378204bca6ce15033305070a3eac5c67eb8c774143d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 18:51:04 compute-0 systemd[1]: Started libpod-conmon-836b3c9ebd7d997c2b75378204bca6ce15033305070a3eac5c67eb8c774143d6.scope.
Jan 23 18:51:04 compute-0 podman[213093]: 2026-01-23 18:51:04.072306487 +0000 UTC m=+0.034326052 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:51:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:51:04 compute-0 podman[213093]: 2026-01-23 18:51:04.205281852 +0000 UTC m=+0.167301437 container init 836b3c9ebd7d997c2b75378204bca6ce15033305070a3eac5c67eb8c774143d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:51:04 compute-0 podman[213093]: 2026-01-23 18:51:04.211915281 +0000 UTC m=+0.173934856 container start 836b3c9ebd7d997c2b75378204bca6ce15033305070a3eac5c67eb8c774143d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mirzakhani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 18:51:04 compute-0 heuristic_mirzakhani[213119]: 167 167
Jan 23 18:51:04 compute-0 systemd[1]: libpod-836b3c9ebd7d997c2b75378204bca6ce15033305070a3eac5c67eb8c774143d6.scope: Deactivated successfully.
Jan 23 18:51:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:04 compute-0 podman[213093]: 2026-01-23 18:51:04.244780479 +0000 UTC m=+0.206800064 container attach 836b3c9ebd7d997c2b75378204bca6ce15033305070a3eac5c67eb8c774143d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mirzakhani, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:51:04 compute-0 podman[213093]: 2026-01-23 18:51:04.245336491 +0000 UTC m=+0.207356056 container died 836b3c9ebd7d997c2b75378204bca6ce15033305070a3eac5c67eb8c774143d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:51:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-82b5066c3f7a2f90e5e011e37b2ff2650cccd9d5a237f904c17bc522cab38c3a-merged.mount: Deactivated successfully.
Jan 23 18:51:04 compute-0 podman[213093]: 2026-01-23 18:51:04.302669314 +0000 UTC m=+0.264688919 container remove 836b3c9ebd7d997c2b75378204bca6ce15033305070a3eac5c67eb8c774143d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 18:51:04 compute-0 systemd[1]: libpod-conmon-836b3c9ebd7d997c2b75378204bca6ce15033305070a3eac5c67eb8c774143d6.scope: Deactivated successfully.
Jan 23 18:51:04 compute-0 sudo[213264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgfczqemhniwnngtxawojryoqpyplxcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194263.4734855-1319-80423932971410/AnsiballZ_copy.py'
Jan 23 18:51:04 compute-0 sudo[213264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:04 compute-0 podman[213247]: 2026-01-23 18:51:04.524739252 +0000 UTC m=+0.050706525 container create 125960eef2e76348037cd2afd041e6ae77260a50bdccd4d637b47d345fb9a176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:51:04 compute-0 systemd[1]: Started libpod-conmon-125960eef2e76348037cd2afd041e6ae77260a50bdccd4d637b47d345fb9a176.scope.
Jan 23 18:51:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:51:04 compute-0 ceph-mon[75097]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0fe075982640d4b1e547cbb0abaf0de4f12bf37f1cfa1c49d9936f02ea065d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0fe075982640d4b1e547cbb0abaf0de4f12bf37f1cfa1c49d9936f02ea065d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0fe075982640d4b1e547cbb0abaf0de4f12bf37f1cfa1c49d9936f02ea065d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0fe075982640d4b1e547cbb0abaf0de4f12bf37f1cfa1c49d9936f02ea065d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:51:04 compute-0 podman[213247]: 2026-01-23 18:51:04.601120142 +0000 UTC m=+0.127087435 container init 125960eef2e76348037cd2afd041e6ae77260a50bdccd4d637b47d345fb9a176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 23 18:51:04 compute-0 podman[213247]: 2026-01-23 18:51:04.508528884 +0000 UTC m=+0.034496177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:51:04 compute-0 podman[213247]: 2026-01-23 18:51:04.611409408 +0000 UTC m=+0.137376681 container start 125960eef2e76348037cd2afd041e6ae77260a50bdccd4d637b47d345fb9a176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:51:04 compute-0 podman[213247]: 2026-01-23 18:51:04.615415924 +0000 UTC m=+0.141383207 container attach 125960eef2e76348037cd2afd041e6ae77260a50bdccd4d637b47d345fb9a176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:51:04 compute-0 python3.9[213270]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769194263.4734855-1319-80423932971410/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:04 compute-0 sudo[213264]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:05 compute-0 sudo[213478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xozpbgwcaxgoyazkjjcfjmvupbnrptfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194264.8759558-1334-58197829624659/AnsiballZ_file.py'
Jan 23 18:51:05 compute-0 sudo[213478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:05 compute-0 lvm[213506]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:51:05 compute-0 lvm[213506]: VG ceph_vg0 finished
Jan 23 18:51:05 compute-0 lvm[213508]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:51:05 compute-0 lvm[213508]: VG ceph_vg1 finished
Jan 23 18:51:05 compute-0 python3.9[213485]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:05 compute-0 lvm[213510]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:51:05 compute-0 lvm[213510]: VG ceph_vg2 finished
Jan 23 18:51:05 compute-0 sudo[213478]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:05 compute-0 lucid_gauss[213276]: {}
Jan 23 18:51:05 compute-0 systemd[1]: libpod-125960eef2e76348037cd2afd041e6ae77260a50bdccd4d637b47d345fb9a176.scope: Deactivated successfully.
Jan 23 18:51:05 compute-0 systemd[1]: libpod-125960eef2e76348037cd2afd041e6ae77260a50bdccd4d637b47d345fb9a176.scope: Consumed 1.538s CPU time.
Jan 23 18:51:05 compute-0 podman[213247]: 2026-01-23 18:51:05.514864734 +0000 UTC m=+1.040832037 container died 125960eef2e76348037cd2afd041e6ae77260a50bdccd4d637b47d345fb9a176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0fe075982640d4b1e547cbb0abaf0de4f12bf37f1cfa1c49d9936f02ea065d5-merged.mount: Deactivated successfully.
Jan 23 18:51:05 compute-0 podman[213247]: 2026-01-23 18:51:05.565674651 +0000 UTC m=+1.091641944 container remove 125960eef2e76348037cd2afd041e6ae77260a50bdccd4d637b47d345fb9a176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 23 18:51:05 compute-0 systemd[1]: libpod-conmon-125960eef2e76348037cd2afd041e6ae77260a50bdccd4d637b47d345fb9a176.scope: Deactivated successfully.
Jan 23 18:51:05 compute-0 sudo[213007]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:51:05 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:51:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:51:05 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:51:05 compute-0 sudo[213601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:51:05 compute-0 sudo[213601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:51:05 compute-0 sudo[213601]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:05 compute-0 sudo[213699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhzaxhmithptjtvmunqlqqpjaesfqdtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194265.5780747-1342-35388923658147/AnsiballZ_command.py'
Jan 23 18:51:05 compute-0 sudo[213699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:06 compute-0 python3.9[213701]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:51:06 compute-0 sudo[213699]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:51:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:51:06 compute-0 ceph-mon[75097]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:06 compute-0 sudo[213854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrozdajmwszhdhkuldfliazrymuwyrsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194266.3596659-1350-22425721090096/AnsiballZ_blockinfile.py'
Jan 23 18:51:06 compute-0 sudo[213854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:06 compute-0 python3.9[213856]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:06 compute-0 sudo[213854]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:07 compute-0 sudo[214006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnthzfnofpcjltferafqxgdfpfioxtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194267.246003-1359-261807986145145/AnsiballZ_command.py'
Jan 23 18:51:07 compute-0 sudo[214006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:07 compute-0 python3.9[214008]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:51:07 compute-0 sudo[214006]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:08 compute-0 sudo[214159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghtlnfdxcmgbbkvgfnjygrqejiyqbuhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194267.912571-1367-121628171400252/AnsiballZ_stat.py'
Jan 23 18:51:08 compute-0 sudo[214159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:08 compute-0 python3.9[214161]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:51:08 compute-0 sudo[214159]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:08 compute-0 sudo[214313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foeyyjhmdeocbrdusftjoujkeppeuxop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194268.6134603-1375-141811477070218/AnsiballZ_command.py'
Jan 23 18:51:08 compute-0 sudo[214313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:09 compute-0 python3.9[214315]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:51:09 compute-0 sudo[214313]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:09 compute-0 ceph-mon[75097]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:09 compute-0 sudo[214468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuydvrklmygeycefwrlqurqbgxcilntj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194269.3473105-1383-256015755408562/AnsiballZ_file.py'
Jan 23 18:51:09 compute-0 sudo[214468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:09 compute-0 python3.9[214470]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:09 compute-0 sudo[214468]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:10 compute-0 sudo[214620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfmmpdevbwjkuwyqjixjndbvahqmvhai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194269.951033-1391-45216352074998/AnsiballZ_stat.py'
Jan 23 18:51:10 compute-0 sudo[214620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:10 compute-0 python3.9[214622]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:51:10 compute-0 sudo[214620]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:10 compute-0 sudo[214743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvardjkwjyadmqypsuyxevqsnqzolgjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194269.951033-1391-45216352074998/AnsiballZ_copy.py'
Jan 23 18:51:10 compute-0 sudo[214743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:10 compute-0 python3.9[214745]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769194269.951033-1391-45216352074998/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:10 compute-0 sudo[214743]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:11 compute-0 ceph-mon[75097]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:11 compute-0 sudo[214907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-merzsevabjuadisdxlqlnrcolobayxle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194271.1509612-1406-259028785260397/AnsiballZ_stat.py'
Jan 23 18:51:11 compute-0 sudo[214907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:11 compute-0 podman[214869]: 2026-01-23 18:51:11.502159137 +0000 UTC m=+0.087818905 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 18:51:11 compute-0 python3.9[214916]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:51:11 compute-0 sudo[214907]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:12 compute-0 sudo[215044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-istzhcabamrutibyzccznrtkkdycuqtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194271.1509612-1406-259028785260397/AnsiballZ_copy.py'
Jan 23 18:51:12 compute-0 sudo[215044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:12 compute-0 python3.9[215046]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769194271.1509612-1406-259028785260397/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:12 compute-0 sudo[215044]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:12 compute-0 sudo[215196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjkoyczhxjqxvxkdnhwpnenyarbscdko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194272.4534786-1421-230937362766091/AnsiballZ_stat.py'
Jan 23 18:51:12 compute-0 sudo[215196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:12 compute-0 python3.9[215198]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:51:12 compute-0 sudo[215196]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:13 compute-0 ceph-mon[75097]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:13 compute-0 sudo[215319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybyntmqupstsswsoifnjsseileoemwpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194272.4534786-1421-230937362766091/AnsiballZ_copy.py'
Jan 23 18:51:13 compute-0 sudo[215319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:13 compute-0 python3.9[215321]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769194272.4534786-1421-230937362766091/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:51:13.570 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:51:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:51:13.571 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:51:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:51:13.572 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:51:13 compute-0 sudo[215319]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:14 compute-0 sudo[215471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdxhbzdjqdphpfpjhfqcpobnnghsltbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194273.7524586-1436-100280027816346/AnsiballZ_systemd.py'
Jan 23 18:51:14 compute-0 sudo[215471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:14 compute-0 python3.9[215473]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:51:14 compute-0 systemd[1]: Reloading.
Jan 23 18:51:14 compute-0 systemd-rc-local-generator[215495]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:51:14 compute-0 systemd-sysv-generator[215502]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:51:14 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 23 18:51:14 compute-0 sudo[215471]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:15 compute-0 sudo[215662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrmdhpynxbewvqwauaakeruepgqopydh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194274.9080615-1444-152631780600294/AnsiballZ_systemd.py'
Jan 23 18:51:15 compute-0 sudo[215662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:15 compute-0 python3.9[215664]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 18:51:15 compute-0 systemd[1]: Reloading.
Jan 23 18:51:15 compute-0 systemd-rc-local-generator[215690]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:51:15 compute-0 systemd-sysv-generator[215694]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:51:15 compute-0 ceph-mon[75097]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:15 compute-0 systemd[1]: Reloading.
Jan 23 18:51:15 compute-0 systemd-rc-local-generator[215724]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:51:15 compute-0 systemd-sysv-generator[215731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:51:16 compute-0 sudo[215662]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:16 compute-0 sshd-session[156165]: Connection closed by 192.168.122.30 port 34690
Jan 23 18:51:16 compute-0 sshd-session[156162]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:51:16 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 23 18:51:16 compute-0 systemd[1]: session-48.scope: Consumed 3min 33.773s CPU time.
Jan 23 18:51:16 compute-0 systemd-logind[792]: Session 48 logged out. Waiting for processes to exit.
Jan 23 18:51:16 compute-0 systemd-logind[792]: Removed session 48.
Jan 23 18:51:16 compute-0 ceph-mon[75097]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:19 compute-0 ceph-mon[75097]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:20 compute-0 podman[215760]: 2026-01-23 18:51:20.97699731 +0000 UTC m=+0.059986698 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 18:51:21 compute-0 sshd-session[215779]: Accepted publickey for zuul from 192.168.122.30 port 59642 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:51:21 compute-0 systemd-logind[792]: New session 49 of user zuul.
Jan 23 18:51:21 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 23 18:51:21 compute-0 sshd-session[215779]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:51:21 compute-0 ceph-mon[75097]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:22 compute-0 python3.9[215932]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:51:22 compute-0 ceph-mon[75097]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:24 compute-0 python3.9[216086]: ansible-ansible.builtin.service_facts Invoked
Jan 23 18:51:24 compute-0 network[216103]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 18:51:24 compute-0 network[216104]: 'network-scripts' will be removed from distribution in near future.
Jan 23 18:51:24 compute-0 network[216105]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 18:51:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:25 compute-0 ceph-mon[75097]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:27 compute-0 ceph-mon[75097]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:27 compute-0 sudo[216375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfwngkxlhwugeejynpfjwirswwndwwkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194287.5727024-42-256138363626774/AnsiballZ_setup.py'
Jan 23 18:51:27 compute-0 sudo[216375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:28 compute-0 python3.9[216377]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 18:51:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:28 compute-0 sudo[216375]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:51:28
Jan 23 18:51:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:51:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:51:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root', 'backups', 'default.rgw.control']
Jan 23 18:51:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:51:28 compute-0 sudo[216459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgdqtnwpwuvphrwterpibkduyezbnxtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194287.5727024-42-256138363626774/AnsiballZ_dnf.py'
Jan 23 18:51:28 compute-0 sudo[216459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:29 compute-0 python3.9[216461]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:51:29 compute-0 ceph-mon[75097]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:51:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:51:31 compute-0 ceph-mon[75097]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:33 compute-0 ceph-mon[75097]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Jan 23 18:51:34 compute-0 sudo[216459]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:35 compute-0 ceph-mon[75097]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Jan 23 18:51:35 compute-0 sudo[216612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szxxsggmtnuqdnpqipjjfbqemagjslrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194294.9070137-54-53511128041023/AnsiballZ_stat.py'
Jan 23 18:51:35 compute-0 sudo[216612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:35 compute-0 python3.9[216614]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:51:35 compute-0 sudo[216612]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 23 18:51:36 compute-0 sudo[216764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjhojrfniqpikbulvtjsedjqweizvipn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194295.8259273-64-107488654720595/AnsiballZ_command.py'
Jan 23 18:51:36 compute-0 sudo[216764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:36 compute-0 python3.9[216766]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:51:36 compute-0 ceph-mon[75097]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 23 18:51:36 compute-0 sudo[216764]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:37 compute-0 sudo[216917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdlrnzaboqzkssagadbszvbfcjrebemg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194296.8367474-74-147639883281709/AnsiballZ_stat.py'
Jan 23 18:51:37 compute-0 sudo[216917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:37 compute-0 python3.9[216919]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:51:37 compute-0 sudo[216917]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:37 compute-0 sudo[217069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edttoygxmcusiewpuetefcbkegbvfggb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194297.4863183-82-203245331025575/AnsiballZ_command.py'
Jan 23 18:51:37 compute-0 sudo[217069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:37 compute-0 python3.9[217071]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:51:37 compute-0 sudo[217069]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 23 18:51:38 compute-0 sudo[217222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idddrngssbcmmjnfaxgyfiefxyavgknq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194298.1440787-90-254238982597517/AnsiballZ_stat.py'
Jan 23 18:51:38 compute-0 sudo[217222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:38 compute-0 ceph-mon[75097]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 23 18:51:38 compute-0 python3.9[217224]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:51:38 compute-0 sudo[217222]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:39 compute-0 sudo[217345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmcshjnxmumxiifvnhsxutulannhkoze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194298.1440787-90-254238982597517/AnsiballZ_copy.py'
Jan 23 18:51:39 compute-0 sudo[217345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:39 compute-0 python3.9[217347]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769194298.1440787-90-254238982597517/.source.iscsi _original_basename=.4nbr0x8t follow=False checksum=9b5a42db6129791d8d780eb7928b7aebf8fd54f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:39 compute-0 sudo[217345]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:39 compute-0 sudo[217497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgaduejsylrjpgtrgbnrykjifgmweaoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194299.4216287-105-186975637127101/AnsiballZ_file.py'
Jan 23 18:51:39 compute-0 sudo[217497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:40 compute-0 python3.9[217499]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:40 compute-0 sudo[217497]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:51:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:51:40 compute-0 sudo[217649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efrbvqmtkoxfsjqrnvmfepqjtcvzlbbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194300.249444-113-88378165355489/AnsiballZ_lineinfile.py'
Jan 23 18:51:40 compute-0 sudo[217649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:40 compute-0 python3.9[217651]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:40 compute-0 sudo[217649]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:41 compute-0 ceph-mon[75097]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 23 18:51:41 compute-0 sudo[217811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sherkicuqlwyexlycriowjchrrcayxkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194301.0416217-122-12067340609677/AnsiballZ_systemd_service.py'
Jan 23 18:51:41 compute-0 sudo[217811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:41 compute-0 podman[217775]: 2026-01-23 18:51:41.706629797 +0000 UTC m=+0.091358564 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:51:41 compute-0 python3.9[217821]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:51:41 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 23 18:51:42 compute-0 sudo[217811]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 23 18:51:42 compute-0 ceph-mon[75097]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 23 18:51:42 compute-0 sudo[217982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oypddawsjhloohihfaqqregzhohqeprl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194302.2002296-130-44672336437313/AnsiballZ_systemd_service.py'
Jan 23 18:51:42 compute-0 sudo[217982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:42 compute-0 python3.9[217984]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:51:42 compute-0 systemd[1]: Reloading.
Jan 23 18:51:42 compute-0 systemd-rc-local-generator[218013]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:51:43 compute-0 systemd-sysv-generator[218017]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:51:43 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 23 18:51:43 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 23 18:51:43 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 18:51:43 compute-0 systemd[1]: Started Open-iSCSI.
Jan 23 18:51:43 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 23 18:51:43 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 23 18:51:43 compute-0 sudo[217982]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 23 18:51:44 compute-0 python3.9[218182]: ansible-ansible.builtin.service_facts Invoked
Jan 23 18:51:44 compute-0 network[218199]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 18:51:44 compute-0 network[218200]: 'network-scripts' will be removed from distribution in near future.
Jan 23 18:51:44 compute-0 network[218201]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 18:51:45 compute-0 ceph-mon[75097]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 23 18:51:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Jan 23 18:51:46 compute-0 ceph-mon[75097]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Jan 23 18:51:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:47 compute-0 sudo[218471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdlitbwbbtotujprgzhulogwqcjhtird ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194307.6641166-153-50646012997112/AnsiballZ_dnf.py'
Jan 23 18:51:47 compute-0 sudo[218471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:48 compute-0 python3.9[218473]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:51:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:48 compute-0 ceph-mon[75097]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 18:51:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 18:51:50 compute-0 systemd[1]: Reloading.
Jan 23 18:51:50 compute-0 systemd-rc-local-generator[218514]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:51:50 compute-0 systemd-sysv-generator[218520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:51:50 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 18:51:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 18:51:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 18:51:51 compute-0 systemd[1]: run-r5bad253d59074fc9a6b7131f31e62f85.service: Deactivated successfully.
Jan 23 18:51:51 compute-0 podman[218637]: 2026-01-23 18:51:51.263839456 +0000 UTC m=+0.096781619 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 23 18:51:51 compute-0 ceph-mon[75097]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:51 compute-0 sudo[218471]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:52 compute-0 sudo[218805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hizxeaudnanwhfmoxqavhwbeeekpuqib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194311.647677-162-120132363233099/AnsiballZ_file.py'
Jan 23 18:51:52 compute-0 sudo[218805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:52 compute-0 python3.9[218807]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 23 18:51:52 compute-0 sudo[218805]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:52 compute-0 sudo[218957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrquriatjarnpcdmladcoaijlpbjeqnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194312.434031-170-68229673971943/AnsiballZ_modprobe.py'
Jan 23 18:51:52 compute-0 sudo[218957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:53 compute-0 python3.9[218959]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 23 18:51:53 compute-0 sudo[218957]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:53 compute-0 ceph-mon[75097]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:53 compute-0 sudo[219113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdrmuwxowivhotqulqzwcxddndfzkdcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194313.3201313-178-266409367172552/AnsiballZ_stat.py'
Jan 23 18:51:53 compute-0 sudo[219113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:53 compute-0 python3.9[219115]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:51:53 compute-0 sudo[219113]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:54 compute-0 sudo[219236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqrbpjprxvmnrivsxvliwznivytltvpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194313.3201313-178-266409367172552/AnsiballZ_copy.py'
Jan 23 18:51:54 compute-0 sudo[219236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:54 compute-0 python3.9[219238]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769194313.3201313-178-266409367172552/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:54 compute-0 sudo[219236]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:54 compute-0 sudo[219388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcuycessbkgliqoirdoozdlddjcydexh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194314.5846324-194-8200262230528/AnsiballZ_lineinfile.py'
Jan 23 18:51:54 compute-0 sudo[219388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:55 compute-0 python3.9[219390]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:55 compute-0 sudo[219388]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:55 compute-0 ceph-mon[75097]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:56 compute-0 sudo[219540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rszgsarvoqwzigmcfdxjbhtzxavecvnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194315.3361979-202-227515210423805/AnsiballZ_systemd.py'
Jan 23 18:51:56 compute-0 sudo[219540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:56 compute-0 python3.9[219542]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:51:56 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 18:51:56 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 23 18:51:56 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 23 18:51:56 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 23 18:51:56 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 23 18:51:56 compute-0 sudo[219540]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:56 compute-0 sudo[219696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqhjewromnheljrwcyfkonodltgkxjfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194316.6983416-210-187320229792318/AnsiballZ_command.py'
Jan 23 18:51:56 compute-0 sudo[219696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:57 compute-0 python3.9[219698]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:51:57 compute-0 sudo[219696]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:57 compute-0 ceph-mon[75097]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:51:57 compute-0 sudo[219849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoxtqgxchwqaqkppckyjpxqtoikkrffe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194317.4888797-220-16172144325189/AnsiballZ_stat.py'
Jan 23 18:51:57 compute-0 sudo[219849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:57 compute-0 python3.9[219851]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:51:58 compute-0 sudo[219849]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:58 compute-0 sudo[220001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpymiytqlkqcaukjdjqjazqpkhosxpyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194318.2406814-229-226788846924436/AnsiballZ_stat.py'
Jan 23 18:51:58 compute-0 sudo[220001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:58 compute-0 python3.9[220003]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:51:58 compute-0 sudo[220001]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:59 compute-0 sudo[220124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slwojhwqwwwrvtktwekfuffecnfilbfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194318.2406814-229-226788846924436/AnsiballZ_copy.py'
Jan 23 18:51:59 compute-0 sudo[220124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:59 compute-0 python3.9[220126]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769194318.2406814-229-226788846924436/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:51:59 compute-0 ceph-mon[75097]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:51:59 compute-0 sudo[220124]: pam_unix(sudo:session): session closed for user root
Jan 23 18:51:59 compute-0 sudo[220276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxgoysoazlbeknqvckgsjvjunipitccf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194319.51681-244-55965778622425/AnsiballZ_command.py'
Jan 23 18:51:59 compute-0 sudo[220276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:51:59 compute-0 python3.9[220278]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:00 compute-0 sudo[220276]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:52:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:52:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:52:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:52:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:52:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:52:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:00 compute-0 sudo[220429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqrlualuwleguiraeptdghwwdzkrivlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194320.1877706-252-160288688933960/AnsiballZ_lineinfile.py'
Jan 23 18:52:00 compute-0 sudo[220429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:00 compute-0 python3.9[220431]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:00 compute-0 sudo[220429]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:01 compute-0 sudo[220581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejmiwyhnnsqrehtcxsmazyrzalvsjljx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194320.8213544-260-184529749902645/AnsiballZ_replace.py'
Jan 23 18:52:01 compute-0 sudo[220581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:01 compute-0 ceph-mon[75097]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:01 compute-0 python3.9[220583]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:01 compute-0 sudo[220581]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:01 compute-0 sudo[220733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dulhmevtachqfqnrsrqqhwkzvaihdnaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194321.6509335-268-18314175854842/AnsiballZ_replace.py'
Jan 23 18:52:01 compute-0 sudo[220733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:02 compute-0 python3.9[220735]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:02 compute-0 sudo[220733]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:02 compute-0 sudo[220885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqdcqadijczkazioxtbqjoijxhdxsmjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194322.3427215-277-222309110784913/AnsiballZ_lineinfile.py'
Jan 23 18:52:02 compute-0 sudo[220885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:02 compute-0 python3.9[220887]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:02 compute-0 sudo[220885]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:03 compute-0 sudo[221037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubsfklnreoqpljghtuovchxvxjdboxdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194322.932891-277-167504804528105/AnsiballZ_lineinfile.py'
Jan 23 18:52:03 compute-0 sudo[221037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:03 compute-0 ceph-mon[75097]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:03 compute-0 python3.9[221039]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:03 compute-0 sudo[221037]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:03 compute-0 sudo[221189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzzzwpikgyokwqggulumenwtspuifhto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194323.5418968-277-5766362009874/AnsiballZ_lineinfile.py'
Jan 23 18:52:03 compute-0 sudo[221189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:03 compute-0 python3.9[221191]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:03 compute-0 sudo[221189]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:04 compute-0 sudo[221341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjcbdxsnzgbboqucvwdspmkxcypwmgcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194324.116815-277-65546069939998/AnsiballZ_lineinfile.py'
Jan 23 18:52:04 compute-0 sudo[221341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:04 compute-0 python3.9[221343]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:04 compute-0 sudo[221341]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:05 compute-0 sudo[221493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obonysdfwvgsyzrtekeutdgwnciinjbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194324.7454724-306-79934851502614/AnsiballZ_stat.py'
Jan 23 18:52:05 compute-0 sudo[221493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:05 compute-0 python3.9[221495]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:52:05 compute-0 sudo[221493]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:05 compute-0 ceph-mon[75097]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:05 compute-0 sudo[221647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eplqzermkpzaparqfcgjtwqyjnfcnyfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194325.4174802-314-232002351476112/AnsiballZ_command.py'
Jan 23 18:52:05 compute-0 sudo[221647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:05 compute-0 sudo[221650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:52:05 compute-0 sudo[221650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:52:05 compute-0 sudo[221650]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:05 compute-0 sudo[221675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:52:05 compute-0 sudo[221675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:52:05 compute-0 python3.9[221649]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:05 compute-0 sudo[221647]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:06 compute-0 sudo[221675]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:06 compute-0 sudo[221882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgbgfyzqqavugxzioyokqdbpfssfpsrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194326.0478146-323-4606514899562/AnsiballZ_systemd_service.py'
Jan 23 18:52:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:52:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:52:06 compute-0 sudo[221882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:52:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:52:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:52:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:52:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:52:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:52:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:52:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:52:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:52:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:52:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:52:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:52:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:52:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:52:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:52:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:52:06 compute-0 sudo[221885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:52:06 compute-0 sudo[221885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:52:06 compute-0 sudo[221885]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:06 compute-0 sudo[221910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:52:06 compute-0 sudo[221910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:52:06 compute-0 python3.9[221884]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:52:06 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 23 18:52:06 compute-0 sudo[221882]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:06 compute-0 podman[221948]: 2026-01-23 18:52:06.786845427 +0000 UTC m=+0.044444901 container create 2a773b9d572f51679dd4a559a7c2c2a7dad2f589c42ef572468b135cb6b253a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:52:06 compute-0 systemd[1]: Started libpod-conmon-2a773b9d572f51679dd4a559a7c2c2a7dad2f589c42ef572468b135cb6b253a5.scope.
Jan 23 18:52:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:52:06 compute-0 podman[221948]: 2026-01-23 18:52:06.765259871 +0000 UTC m=+0.022859365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:52:06 compute-0 podman[221948]: 2026-01-23 18:52:06.869935985 +0000 UTC m=+0.127535479 container init 2a773b9d572f51679dd4a559a7c2c2a7dad2f589c42ef572468b135cb6b253a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_greider, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 23 18:52:06 compute-0 podman[221948]: 2026-01-23 18:52:06.878937046 +0000 UTC m=+0.136536520 container start 2a773b9d572f51679dd4a559a7c2c2a7dad2f589c42ef572468b135cb6b253a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_greider, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:52:06 compute-0 podman[221948]: 2026-01-23 18:52:06.882390019 +0000 UTC m=+0.139989493 container attach 2a773b9d572f51679dd4a559a7c2c2a7dad2f589c42ef572468b135cb6b253a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_greider, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:52:06 compute-0 great_greider[221986]: 167 167
Jan 23 18:52:06 compute-0 systemd[1]: libpod-2a773b9d572f51679dd4a559a7c2c2a7dad2f589c42ef572468b135cb6b253a5.scope: Deactivated successfully.
Jan 23 18:52:06 compute-0 podman[221948]: 2026-01-23 18:52:06.88624057 +0000 UTC m=+0.143840044 container died 2a773b9d572f51679dd4a559a7c2c2a7dad2f589c42ef572468b135cb6b253a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:52:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a15ff9b0b358e7a883eef1b028c335ccdf92713af0c4b1aa8e75c14bd3bbd8ae-merged.mount: Deactivated successfully.
Jan 23 18:52:06 compute-0 podman[221948]: 2026-01-23 18:52:06.927463552 +0000 UTC m=+0.185063026 container remove 2a773b9d572f51679dd4a559a7c2c2a7dad2f589c42ef572468b135cb6b253a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 18:52:06 compute-0 systemd[1]: libpod-conmon-2a773b9d572f51679dd4a559a7c2c2a7dad2f589c42ef572468b135cb6b253a5.scope: Deactivated successfully.
Jan 23 18:52:07 compute-0 podman[222089]: 2026-01-23 18:52:07.1069188 +0000 UTC m=+0.047903485 container create 1dc7133aa88b8201b0ad798310d8a263289eec0282e07a71fecc390f30ffa5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 18:52:07 compute-0 systemd[1]: Started libpod-conmon-1dc7133aa88b8201b0ad798310d8a263289eec0282e07a71fecc390f30ffa5e1.scope.
Jan 23 18:52:07 compute-0 podman[222089]: 2026-01-23 18:52:07.086706782 +0000 UTC m=+0.027691497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:52:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb028ab33306571f13739fa34af2440487b6553897cc52d6b9aea11920382f86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb028ab33306571f13739fa34af2440487b6553897cc52d6b9aea11920382f86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb028ab33306571f13739fa34af2440487b6553897cc52d6b9aea11920382f86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb028ab33306571f13739fa34af2440487b6553897cc52d6b9aea11920382f86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb028ab33306571f13739fa34af2440487b6553897cc52d6b9aea11920382f86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:07 compute-0 podman[222089]: 2026-01-23 18:52:07.205089847 +0000 UTC m=+0.146074562 container init 1dc7133aa88b8201b0ad798310d8a263289eec0282e07a71fecc390f30ffa5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cannon, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:52:07 compute-0 podman[222089]: 2026-01-23 18:52:07.217196912 +0000 UTC m=+0.158181597 container start 1dc7133aa88b8201b0ad798310d8a263289eec0282e07a71fecc390f30ffa5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:52:07 compute-0 sudo[222160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsnazevqpmqyyprqvtyizxoeesrxsaop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194326.935011-331-42999842463394/AnsiballZ_systemd_service.py'
Jan 23 18:52:07 compute-0 podman[222089]: 2026-01-23 18:52:07.220678347 +0000 UTC m=+0.161663052 container attach 1dc7133aa88b8201b0ad798310d8a263289eec0282e07a71fecc390f30ffa5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:52:07 compute-0 sudo[222160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:07 compute-0 ceph-mon[75097]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:07 compute-0 python3.9[222164]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:52:07 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 23 18:52:07 compute-0 udevadm[222179]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 23 18:52:07 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 23 18:52:07 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 23 18:52:07 compute-0 multipathd[222184]: --------start up--------
Jan 23 18:52:07 compute-0 multipathd[222184]: read /etc/multipath.conf
Jan 23 18:52:07 compute-0 multipathd[222184]: path checkers start up
Jan 23 18:52:07 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 23 18:52:07 compute-0 zen_cannon[222131]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:52:07 compute-0 zen_cannon[222131]: --> All data devices are unavailable
Jan 23 18:52:07 compute-0 sudo[222160]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:07 compute-0 podman[222089]: 2026-01-23 18:52:07.706376843 +0000 UTC m=+0.647361528 container died 1dc7133aa88b8201b0ad798310d8a263289eec0282e07a71fecc390f30ffa5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 23 18:52:07 compute-0 systemd[1]: libpod-1dc7133aa88b8201b0ad798310d8a263289eec0282e07a71fecc390f30ffa5e1.scope: Deactivated successfully.
Jan 23 18:52:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb028ab33306571f13739fa34af2440487b6553897cc52d6b9aea11920382f86-merged.mount: Deactivated successfully.
Jan 23 18:52:07 compute-0 podman[222089]: 2026-01-23 18:52:07.756617906 +0000 UTC m=+0.697602611 container remove 1dc7133aa88b8201b0ad798310d8a263289eec0282e07a71fecc390f30ffa5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cannon, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:52:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:07 compute-0 systemd[1]: libpod-conmon-1dc7133aa88b8201b0ad798310d8a263289eec0282e07a71fecc390f30ffa5e1.scope: Deactivated successfully.
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.773971) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194327774015, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1594, "num_deletes": 250, "total_data_size": 2642396, "memory_usage": 2673320, "flush_reason": "Manual Compaction"}
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194327785645, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1497827, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11827, "largest_seqno": 13420, "table_properties": {"data_size": 1492443, "index_size": 2588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13446, "raw_average_key_size": 20, "raw_value_size": 1480686, "raw_average_value_size": 2213, "num_data_blocks": 119, "num_entries": 669, "num_filter_entries": 669, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769194150, "oldest_key_time": 1769194150, "file_creation_time": 1769194327, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11721 microseconds, and 4793 cpu microseconds.
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.785693) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1497827 bytes OK
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.785712) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.787279) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.787293) EVENT_LOG_v1 {"time_micros": 1769194327787289, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.787311) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2635524, prev total WAL file size 2635524, number of live WAL files 2.
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.788107) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1462KB)], [29(8084KB)]
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194327788210, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9776010, "oldest_snapshot_seqno": -1}
Jan 23 18:52:07 compute-0 sudo[221910]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3998 keys, 7539852 bytes, temperature: kUnknown
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194327847025, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7539852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7511388, "index_size": 17346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95454, "raw_average_key_size": 23, "raw_value_size": 7437581, "raw_average_value_size": 1860, "num_data_blocks": 754, "num_entries": 3998, "num_filter_entries": 3998, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769194327, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.847263) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7539852 bytes
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.848677) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.0 rd, 128.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.9 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(11.6) write-amplify(5.0) OK, records in: 4427, records dropped: 429 output_compression: NoCompression
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.848698) EVENT_LOG_v1 {"time_micros": 1769194327848688, "job": 12, "event": "compaction_finished", "compaction_time_micros": 58887, "compaction_time_cpu_micros": 20849, "output_level": 6, "num_output_files": 1, "total_output_size": 7539852, "num_input_records": 4427, "num_output_records": 3998, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194327849153, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194327850925, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.787944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.851013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.851019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.851021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.851023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:52:07 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:52:07.851024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:52:07 compute-0 sudo[222230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:52:07 compute-0 sudo[222230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:52:07 compute-0 sudo[222230]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:07 compute-0 sudo[222255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:52:07 compute-0 sudo[222255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:52:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:08 compute-0 podman[222383]: 2026-01-23 18:52:08.276685709 +0000 UTC m=+0.047530637 container create 18e4a248788ad41a031ebb8fb7a08f0b140804e0fac1ecf9ab5f67ebb4f4d96f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_banzai, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:52:08 compute-0 systemd[1]: Started libpod-conmon-18e4a248788ad41a031ebb8fb7a08f0b140804e0fac1ecf9ab5f67ebb4f4d96f.scope.
Jan 23 18:52:08 compute-0 sudo[222429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uctpbicdbfpovvmcniiqivuqgkbalurz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194328.0194497-343-6492185020042/AnsiballZ_file.py'
Jan 23 18:52:08 compute-0 sudo[222429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:52:08 compute-0 podman[222383]: 2026-01-23 18:52:08.257404561 +0000 UTC m=+0.028249509 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:52:08 compute-0 podman[222383]: 2026-01-23 18:52:08.360453662 +0000 UTC m=+0.131298620 container init 18e4a248788ad41a031ebb8fb7a08f0b140804e0fac1ecf9ab5f67ebb4f4d96f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_banzai, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:52:08 compute-0 podman[222383]: 2026-01-23 18:52:08.36699232 +0000 UTC m=+0.137837248 container start 18e4a248788ad41a031ebb8fb7a08f0b140804e0fac1ecf9ab5f67ebb4f4d96f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_banzai, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:52:08 compute-0 podman[222383]: 2026-01-23 18:52:08.370662937 +0000 UTC m=+0.141507865 container attach 18e4a248788ad41a031ebb8fb7a08f0b140804e0fac1ecf9ab5f67ebb4f4d96f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:52:08 compute-0 interesting_banzai[222433]: 167 167
Jan 23 18:52:08 compute-0 systemd[1]: libpod-18e4a248788ad41a031ebb8fb7a08f0b140804e0fac1ecf9ab5f67ebb4f4d96f.scope: Deactivated successfully.
Jan 23 18:52:08 compute-0 podman[222383]: 2026-01-23 18:52:08.374214713 +0000 UTC m=+0.145059641 container died 18e4a248788ad41a031ebb8fb7a08f0b140804e0fac1ecf9ab5f67ebb4f4d96f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_banzai, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 18:52:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-086f40b3f81ea8dd9a969602e5d8eefef4adc51374faf1c89f6bd5363796e5a8-merged.mount: Deactivated successfully.
Jan 23 18:52:08 compute-0 podman[222383]: 2026-01-23 18:52:08.419330787 +0000 UTC m=+0.190175735 container remove 18e4a248788ad41a031ebb8fb7a08f0b140804e0fac1ecf9ab5f67ebb4f4d96f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 18:52:08 compute-0 systemd[1]: libpod-conmon-18e4a248788ad41a031ebb8fb7a08f0b140804e0fac1ecf9ab5f67ebb4f4d96f.scope: Deactivated successfully.
Jan 23 18:52:08 compute-0 python3.9[222435]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 23 18:52:08 compute-0 sudo[222429]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:08 compute-0 podman[222457]: 2026-01-23 18:52:08.601238596 +0000 UTC m=+0.041959499 container create cef8285535fcc75d53f631470bed726b287419d1568c7ff43df25e3b7b585869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:52:08 compute-0 systemd[1]: Started libpod-conmon-cef8285535fcc75d53f631470bed726b287419d1568c7ff43df25e3b7b585869.scope.
Jan 23 18:52:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a70d77def2bf72718657a3658e4c3b41cd509655e0a59e41e7d2212ff99d30c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a70d77def2bf72718657a3658e4c3b41cd509655e0a59e41e7d2212ff99d30c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a70d77def2bf72718657a3658e4c3b41cd509655e0a59e41e7d2212ff99d30c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a70d77def2bf72718657a3658e4c3b41cd509655e0a59e41e7d2212ff99d30c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:08 compute-0 podman[222457]: 2026-01-23 18:52:08.672839791 +0000 UTC m=+0.113560704 container init cef8285535fcc75d53f631470bed726b287419d1568c7ff43df25e3b7b585869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 23 18:52:08 compute-0 podman[222457]: 2026-01-23 18:52:08.582612952 +0000 UTC m=+0.023333875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:52:08 compute-0 podman[222457]: 2026-01-23 18:52:08.680116425 +0000 UTC m=+0.120837328 container start cef8285535fcc75d53f631470bed726b287419d1568c7ff43df25e3b7b585869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:52:08 compute-0 podman[222457]: 2026-01-23 18:52:08.683544748 +0000 UTC m=+0.124265661 container attach cef8285535fcc75d53f631470bed726b287419d1568c7ff43df25e3b7b585869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 23 18:52:08 compute-0 ceph-mon[75097]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:08 compute-0 sudo[222631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joocbynydrquybajnemhekrtbjndkvwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194328.7354352-351-189997291531262/AnsiballZ_modprobe.py'
Jan 23 18:52:08 compute-0 priceless_bose[222497]: {
Jan 23 18:52:08 compute-0 priceless_bose[222497]:     "0": [
Jan 23 18:52:08 compute-0 priceless_bose[222497]:         {
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "devices": [
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "/dev/loop3"
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             ],
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_name": "ceph_lv0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_size": "21470642176",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "name": "ceph_lv0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "tags": {
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.cluster_name": "ceph",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.crush_device_class": "",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.encrypted": "0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.objectstore": "bluestore",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.osd_id": "0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.type": "block",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.vdo": "0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.with_tpm": "0"
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             },
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "type": "block",
Jan 23 18:52:08 compute-0 sudo[222631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "vg_name": "ceph_vg0"
Jan 23 18:52:08 compute-0 priceless_bose[222497]:         }
Jan 23 18:52:08 compute-0 priceless_bose[222497]:     ],
Jan 23 18:52:08 compute-0 priceless_bose[222497]:     "1": [
Jan 23 18:52:08 compute-0 priceless_bose[222497]:         {
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "devices": [
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "/dev/loop4"
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             ],
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_name": "ceph_lv1",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_size": "21470642176",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "name": "ceph_lv1",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "tags": {
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.cluster_name": "ceph",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.crush_device_class": "",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.encrypted": "0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.objectstore": "bluestore",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.osd_id": "1",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.type": "block",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.vdo": "0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.with_tpm": "0"
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             },
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "type": "block",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "vg_name": "ceph_vg1"
Jan 23 18:52:08 compute-0 priceless_bose[222497]:         }
Jan 23 18:52:08 compute-0 priceless_bose[222497]:     ],
Jan 23 18:52:08 compute-0 priceless_bose[222497]:     "2": [
Jan 23 18:52:08 compute-0 priceless_bose[222497]:         {
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "devices": [
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "/dev/loop5"
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             ],
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_name": "ceph_lv2",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_size": "21470642176",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "name": "ceph_lv2",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "tags": {
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.cluster_name": "ceph",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.crush_device_class": "",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.encrypted": "0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.objectstore": "bluestore",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.osd_id": "2",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.type": "block",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.vdo": "0",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:                 "ceph.with_tpm": "0"
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             },
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "type": "block",
Jan 23 18:52:08 compute-0 priceless_bose[222497]:             "vg_name": "ceph_vg2"
Jan 23 18:52:08 compute-0 priceless_bose[222497]:         }
Jan 23 18:52:08 compute-0 priceless_bose[222497]:     ]
Jan 23 18:52:08 compute-0 priceless_bose[222497]: }
Jan 23 18:52:09 compute-0 systemd[1]: libpod-cef8285535fcc75d53f631470bed726b287419d1568c7ff43df25e3b7b585869.scope: Deactivated successfully.
Jan 23 18:52:09 compute-0 podman[222457]: 2026-01-23 18:52:09.010192819 +0000 UTC m=+0.450913742 container died cef8285535fcc75d53f631470bed726b287419d1568c7ff43df25e3b7b585869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:52:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a70d77def2bf72718657a3658e4c3b41cd509655e0a59e41e7d2212ff99d30c-merged.mount: Deactivated successfully.
Jan 23 18:52:09 compute-0 podman[222457]: 2026-01-23 18:52:09.060039144 +0000 UTC m=+0.500760047 container remove cef8285535fcc75d53f631470bed726b287419d1568c7ff43df25e3b7b585869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:52:09 compute-0 systemd[1]: libpod-conmon-cef8285535fcc75d53f631470bed726b287419d1568c7ff43df25e3b7b585869.scope: Deactivated successfully.
Jan 23 18:52:09 compute-0 sudo[222255]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:09 compute-0 sudo[222646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:52:09 compute-0 sudo[222646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:52:09 compute-0 sudo[222646]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:09 compute-0 python3.9[222633]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 23 18:52:09 compute-0 kernel: Key type psk registered
Jan 23 18:52:09 compute-0 sudo[222671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:52:09 compute-0 sudo[222671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:52:09 compute-0 sudo[222631]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:09 compute-0 podman[222767]: 2026-01-23 18:52:09.477274761 +0000 UTC m=+0.039676411 container create b7f9a810552333ae3a90dd4b837d0b65c9c69cd38a0d73260b59b446556a0be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_moser, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:52:09 compute-0 systemd[1]: Started libpod-conmon-b7f9a810552333ae3a90dd4b837d0b65c9c69cd38a0d73260b59b446556a0be9.scope.
Jan 23 18:52:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:52:09 compute-0 podman[222767]: 2026-01-23 18:52:09.549506859 +0000 UTC m=+0.111908529 container init b7f9a810552333ae3a90dd4b837d0b65c9c69cd38a0d73260b59b446556a0be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_moser, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:52:09 compute-0 podman[222767]: 2026-01-23 18:52:09.458981434 +0000 UTC m=+0.021383114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:52:09 compute-0 podman[222767]: 2026-01-23 18:52:09.559718495 +0000 UTC m=+0.122120145 container start b7f9a810552333ae3a90dd4b837d0b65c9c69cd38a0d73260b59b446556a0be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 18:52:09 compute-0 podman[222767]: 2026-01-23 18:52:09.56322152 +0000 UTC m=+0.125623200 container attach b7f9a810552333ae3a90dd4b837d0b65c9c69cd38a0d73260b59b446556a0be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_moser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 23 18:52:09 compute-0 strange_moser[222811]: 167 167
Jan 23 18:52:09 compute-0 systemd[1]: libpod-b7f9a810552333ae3a90dd4b837d0b65c9c69cd38a0d73260b59b446556a0be9.scope: Deactivated successfully.
Jan 23 18:52:09 compute-0 podman[222767]: 2026-01-23 18:52:09.567038321 +0000 UTC m=+0.129439971 container died b7f9a810552333ae3a90dd4b837d0b65c9c69cd38a0d73260b59b446556a0be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 23 18:52:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1621709b6c14b2cdc31565b160893b4fd870573285940720bc85930508f627a2-merged.mount: Deactivated successfully.
Jan 23 18:52:09 compute-0 podman[222767]: 2026-01-23 18:52:09.609469358 +0000 UTC m=+0.171871048 container remove b7f9a810552333ae3a90dd4b837d0b65c9c69cd38a0d73260b59b446556a0be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_moser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:52:09 compute-0 systemd[1]: libpod-conmon-b7f9a810552333ae3a90dd4b837d0b65c9c69cd38a0d73260b59b446556a0be9.scope: Deactivated successfully.
Jan 23 18:52:09 compute-0 sudo[222902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gokxdcfynhgwajtgqjnuublqjrofscqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194329.4277909-359-244832188271169/AnsiballZ_stat.py'
Jan 23 18:52:09 compute-0 sudo[222902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:09 compute-0 podman[222910]: 2026-01-23 18:52:09.768213346 +0000 UTC m=+0.039711910 container create 678fb40c176d07472c6cdec070040721df0f9302c61c4c3ce365e3055eb12e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 18:52:09 compute-0 systemd[1]: Started libpod-conmon-678fb40c176d07472c6cdec070040721df0f9302c61c4c3ce365e3055eb12e6b.scope.
Jan 23 18:52:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:52:09 compute-0 podman[222910]: 2026-01-23 18:52:09.750047152 +0000 UTC m=+0.021545706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d466ddabce64837257f00ef190ffcef975138f996a9e004463f596bfe8a4009/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d466ddabce64837257f00ef190ffcef975138f996a9e004463f596bfe8a4009/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d466ddabce64837257f00ef190ffcef975138f996a9e004463f596bfe8a4009/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d466ddabce64837257f00ef190ffcef975138f996a9e004463f596bfe8a4009/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:52:09 compute-0 podman[222910]: 2026-01-23 18:52:09.86623941 +0000 UTC m=+0.137737964 container init 678fb40c176d07472c6cdec070040721df0f9302c61c4c3ce365e3055eb12e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:52:09 compute-0 podman[222910]: 2026-01-23 18:52:09.873432262 +0000 UTC m=+0.144930796 container start 678fb40c176d07472c6cdec070040721df0f9302c61c4c3ce365e3055eb12e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:52:09 compute-0 podman[222910]: 2026-01-23 18:52:09.876916607 +0000 UTC m=+0.148415171 container attach 678fb40c176d07472c6cdec070040721df0f9302c61c4c3ce365e3055eb12e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:52:09 compute-0 python3.9[222905]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:52:09 compute-0 sudo[222902]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:10 compute-0 sudo[223072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scjsygeetmpzqcagtthsmhaivbtmckrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194329.4277909-359-244832188271169/AnsiballZ_copy.py'
Jan 23 18:52:10 compute-0 sudo[223072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:10 compute-0 python3.9[223080]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769194329.4277909-359-244832188271169/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:10 compute-0 sudo[223072]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:10 compute-0 lvm[223125]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:52:10 compute-0 lvm[223125]: VG ceph_vg0 finished
Jan 23 18:52:10 compute-0 lvm[223134]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:52:10 compute-0 lvm[223134]: VG ceph_vg1 finished
Jan 23 18:52:10 compute-0 lvm[223152]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:52:10 compute-0 lvm[223152]: VG ceph_vg2 finished
Jan 23 18:52:10 compute-0 optimistic_golick[222927]: {}
Jan 23 18:52:10 compute-0 systemd[1]: libpod-678fb40c176d07472c6cdec070040721df0f9302c61c4c3ce365e3055eb12e6b.scope: Deactivated successfully.
Jan 23 18:52:10 compute-0 systemd[1]: libpod-678fb40c176d07472c6cdec070040721df0f9302c61c4c3ce365e3055eb12e6b.scope: Consumed 1.289s CPU time.
Jan 23 18:52:10 compute-0 podman[222910]: 2026-01-23 18:52:10.680626721 +0000 UTC m=+0.952125275 container died 678fb40c176d07472c6cdec070040721df0f9302c61c4c3ce365e3055eb12e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:52:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d466ddabce64837257f00ef190ffcef975138f996a9e004463f596bfe8a4009-merged.mount: Deactivated successfully.
Jan 23 18:52:10 compute-0 podman[222910]: 2026-01-23 18:52:10.727737139 +0000 UTC m=+0.999235673 container remove 678fb40c176d07472c6cdec070040721df0f9302c61c4c3ce365e3055eb12e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:52:10 compute-0 systemd[1]: libpod-conmon-678fb40c176d07472c6cdec070040721df0f9302c61c4c3ce365e3055eb12e6b.scope: Deactivated successfully.
Jan 23 18:52:10 compute-0 sudo[222671]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:52:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:52:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:52:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:52:10 compute-0 sudo[223242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:52:10 compute-0 sudo[223242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:52:10 compute-0 sudo[223242]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:10 compute-0 sudo[223317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxqdgzxoqnhwyxnhkpkzczbjelezibfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194330.6537342-375-244604489520066/AnsiballZ_lineinfile.py'
Jan 23 18:52:10 compute-0 sudo[223317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:11 compute-0 python3.9[223319]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:11 compute-0 sudo[223317]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:11 compute-0 ceph-mon[75097]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:52:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:52:11 compute-0 sudo[223469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otovougewticvljienzjcgccisuvjcht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194331.317519-383-48963487788383/AnsiballZ_systemd.py'
Jan 23 18:52:11 compute-0 sudo[223469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:11 compute-0 python3.9[223471]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:52:11 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 18:52:11 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 23 18:52:11 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 23 18:52:11 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 23 18:52:11 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 23 18:52:12 compute-0 sudo[223469]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:12 compute-0 podman[223472]: 2026-01-23 18:52:12.007988375 +0000 UTC m=+0.093973089 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 18:52:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:12 compute-0 sudo[223648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rksfjjcmaotyaklqrfmgwydnayzzqvdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194332.186681-391-22889877378208/AnsiballZ_dnf.py'
Jan 23 18:52:12 compute-0 sudo[223648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:12 compute-0 python3.9[223650]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 18:52:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:13 compute-0 ceph-mon[75097]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:52:13.571 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:52:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:52:13.572 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:52:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:52:13.572 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:52:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:14 compute-0 systemd[1]: Reloading.
Jan 23 18:52:14 compute-0 systemd-sysv-generator[223683]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:52:14 compute-0 systemd-rc-local-generator[223678]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:52:15 compute-0 systemd[1]: Reloading.
Jan 23 18:52:15 compute-0 systemd-rc-local-generator[223716]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:52:15 compute-0 systemd-sysv-generator[223723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:52:15 compute-0 ceph-mon[75097]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:15 compute-0 systemd-logind[792]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 18:52:15 compute-0 systemd-logind[792]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 23 18:52:15 compute-0 lvm[223764]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:52:15 compute-0 lvm[223765]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:52:15 compute-0 lvm[223764]: VG ceph_vg2 finished
Jan 23 18:52:15 compute-0 lvm[223765]: VG ceph_vg1 finished
Jan 23 18:52:15 compute-0 lvm[223766]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:52:15 compute-0 lvm[223766]: VG ceph_vg0 finished
Jan 23 18:52:15 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 18:52:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 18:52:16 compute-0 systemd[1]: Reloading.
Jan 23 18:52:16 compute-0 systemd-sysv-generator[223822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:52:16 compute-0 systemd-rc-local-generator[223819]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:52:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:16 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 18:52:16 compute-0 sudo[223648]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:17 compute-0 sudo[225121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkvkwsahbciegxtknmvcnjtyebpsceuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194337.0598927-399-248847230986854/AnsiballZ_systemd_service.py'
Jan 23 18:52:17 compute-0 sudo[225121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:17 compute-0 ceph-mon[75097]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 18:52:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 18:52:17 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.588s CPU time.
Jan 23 18:52:17 compute-0 systemd[1]: run-r6d6635bd0d8646d79dfcba11cc034cb5.service: Deactivated successfully.
Jan 23 18:52:17 compute-0 python3.9[225123]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:52:17 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 23 18:52:17 compute-0 iscsid[218024]: iscsid shutting down.
Jan 23 18:52:17 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 23 18:52:17 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 23 18:52:17 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 23 18:52:17 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 23 18:52:17 compute-0 systemd[1]: Started Open-iSCSI.
Jan 23 18:52:17 compute-0 sudo[225121]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:18 compute-0 sudo[225278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbkgotcnprguppgfdzxnftiqodiovruo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194337.9087634-407-158856053865385/AnsiballZ_systemd_service.py'
Jan 23 18:52:18 compute-0 sudo[225278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:18 compute-0 python3.9[225280]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:52:18 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 23 18:52:18 compute-0 multipathd[222184]: exit (signal)
Jan 23 18:52:18 compute-0 multipathd[222184]: --------shut down-------
Jan 23 18:52:18 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 23 18:52:18 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 23 18:52:18 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 23 18:52:18 compute-0 multipathd[225286]: --------start up--------
Jan 23 18:52:18 compute-0 multipathd[225286]: read /etc/multipath.conf
Jan 23 18:52:18 compute-0 multipathd[225286]: path checkers start up
Jan 23 18:52:18 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 23 18:52:18 compute-0 sudo[225278]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:19 compute-0 ceph-mon[75097]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:19 compute-0 python3.9[225443]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 18:52:20 compute-0 sudo[225597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqzckswvngfbvecrxopbwhezvnjmynwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194339.9638848-425-153665104721662/AnsiballZ_file.py'
Jan 23 18:52:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:20 compute-0 sudo[225597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:20 compute-0 python3.9[225599]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:20 compute-0 sudo[225597]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:21 compute-0 sudo[225749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlkzriyvirkkydzkhblxbapwabmedqmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194340.7785697-436-34208453283150/AnsiballZ_systemd_service.py'
Jan 23 18:52:21 compute-0 sudo[225749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:21 compute-0 python3.9[225751]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 18:52:21 compute-0 ceph-mon[75097]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:21 compute-0 systemd[1]: Reloading.
Jan 23 18:52:21 compute-0 systemd-sysv-generator[225791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:52:21 compute-0 systemd-rc-local-generator[225788]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:52:21 compute-0 podman[225753]: 2026-01-23 18:52:21.535475626 +0000 UTC m=+0.099802043 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 18:52:21 compute-0 sudo[225749]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:22 compute-0 python3.9[225954]: ansible-ansible.builtin.service_facts Invoked
Jan 23 18:52:22 compute-0 network[225971]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 18:52:22 compute-0 network[225972]: 'network-scripts' will be removed from distribution in near future.
Jan 23 18:52:22 compute-0 network[225973]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 18:52:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:23 compute-0 ceph-mon[75097]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:25 compute-0 ceph-mon[75097]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:26 compute-0 sudo[226244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kydntomxtjfjxdgjakfvoqlpmsrsqysz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194346.0773423-455-187974472924924/AnsiballZ_systemd_service.py'
Jan 23 18:52:26 compute-0 sudo[226244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:26 compute-0 python3.9[226246]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:52:26 compute-0 sudo[226244]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:27 compute-0 sudo[226397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xviepomkexujytvruvxizrzzuocdqrnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194346.8348365-455-10740210470268/AnsiballZ_systemd_service.py'
Jan 23 18:52:27 compute-0 sudo[226397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:27 compute-0 ceph-mon[75097]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:27 compute-0 python3.9[226399]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:52:27 compute-0 sudo[226397]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:27 compute-0 sudo[226550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkooglnhvdtkxapcewjqziwptzylfhge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194347.6546118-455-47199543890771/AnsiballZ_systemd_service.py'
Jan 23 18:52:27 compute-0 sudo[226550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:28 compute-0 python3.9[226552]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:52:28 compute-0 sudo[226550]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:28 compute-0 sudo[226703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evvvmjpcjrvvrqpymqywlizsgfudzjth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194348.4613523-455-257622516643123/AnsiballZ_systemd_service.py'
Jan 23 18:52:28 compute-0 sudo[226703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:52:28
Jan 23 18:52:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:52:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:52:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'vms', 'volumes']
Jan 23 18:52:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:52:29 compute-0 python3.9[226705]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:52:29 compute-0 sudo[226703]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:29 compute-0 ceph-mon[75097]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:29 compute-0 sudo[226856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clegzlcvhegdjtqsqnebveacpfwoebte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194349.1783512-455-263106603998034/AnsiballZ_systemd_service.py'
Jan 23 18:52:29 compute-0 sudo[226856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:29 compute-0 python3.9[226858]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:52:29 compute-0 sudo[226856]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:52:30 compute-0 sudo[227009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvtpwshmvndixttncurmegamekmstzai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194349.9111586-455-166443718252078/AnsiballZ_systemd_service.py'
Jan 23 18:52:30 compute-0 sudo[227009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:30 compute-0 ceph-mon[75097]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:30 compute-0 python3.9[227011]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:52:30 compute-0 sudo[227009]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:52:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:52:30 compute-0 sudo[227162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilftkpsjrofxdrhjwvpjjlwjeheohbsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194350.6779447-455-9573658891618/AnsiballZ_systemd_service.py'
Jan 23 18:52:30 compute-0 sudo[227162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:31 compute-0 python3.9[227164]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:52:31 compute-0 sudo[227162]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:31 compute-0 sudo[227315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkkytaudeuginfjuisfkhtzzjnmpwpzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194351.45519-455-279717795320053/AnsiballZ_systemd_service.py'
Jan 23 18:52:31 compute-0 sudo[227315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:32 compute-0 python3.9[227317]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:52:32 compute-0 sudo[227315]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:32 compute-0 sudo[227468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpxwjevaymfilfxtisgfnbqakddpmhxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194352.4409878-514-221237805860584/AnsiballZ_file.py'
Jan 23 18:52:32 compute-0 sudo[227468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:32 compute-0 python3.9[227470]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:32 compute-0 sudo[227468]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:33 compute-0 sudo[227620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akhfsfquntubyoxcqyyvjjigzubgdied ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194353.040908-514-224237090749807/AnsiballZ_file.py'
Jan 23 18:52:33 compute-0 sudo[227620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:33 compute-0 ceph-mon[75097]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:33 compute-0 python3.9[227622]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:33 compute-0 sudo[227620]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:33 compute-0 sudo[227772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyuvyfpgihulbuuiwxoenlivkthnwtlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194353.6724007-514-270952188348911/AnsiballZ_file.py'
Jan 23 18:52:33 compute-0 sudo[227772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:34 compute-0 python3.9[227774]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:34 compute-0 sudo[227772]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:34 compute-0 sudo[227924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqukzrpbugrhltjvqtnmlotlwilzgnot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194354.279523-514-187192396758676/AnsiballZ_file.py'
Jan 23 18:52:34 compute-0 sudo[227924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:34 compute-0 python3.9[227926]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:34 compute-0 sudo[227924]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:35 compute-0 sudo[228076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htoodhiwhmbbsmqvxjoykhmqeryitbnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194354.9191828-514-47437589901836/AnsiballZ_file.py'
Jan 23 18:52:35 compute-0 sudo[228076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:35 compute-0 python3.9[228078]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:35 compute-0 sudo[228076]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:35 compute-0 ceph-mon[75097]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:35 compute-0 sudo[228228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jozryabyjnajtosajxzwnqxfzdyttkiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194355.4659088-514-97867008968282/AnsiballZ_file.py'
Jan 23 18:52:35 compute-0 sudo[228228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:35 compute-0 python3.9[228230]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:35 compute-0 sudo[228228]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:36 compute-0 sudo[228380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeevpaqcodtvcvztzekgfhxklotrxqax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194356.0348365-514-111929057132830/AnsiballZ_file.py'
Jan 23 18:52:36 compute-0 sudo[228380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:36 compute-0 python3.9[228382]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:36 compute-0 sudo[228380]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:36 compute-0 sudo[228532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqfmazcfyskighhgkagtjrqizvvakfto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194356.660577-514-147851659757154/AnsiballZ_file.py'
Jan 23 18:52:36 compute-0 sudo[228532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:37 compute-0 python3.9[228534]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:37 compute-0 sudo[228532]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:37 compute-0 ceph-mon[75097]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:37 compute-0 sudo[228684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phpkxylkmgyzmkrymlndnqlxfwbcauoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194357.2574286-571-101809930499128/AnsiballZ_file.py'
Jan 23 18:52:37 compute-0 sudo[228684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:37 compute-0 python3.9[228686]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:37 compute-0 sudo[228684]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:38 compute-0 sudo[228836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auonpnfwrxrddrngagyxyckurxnyunut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194357.8751965-571-50788726897840/AnsiballZ_file.py'
Jan 23 18:52:38 compute-0 sudo[228836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:38 compute-0 python3.9[228838]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:38 compute-0 sudo[228836]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:38 compute-0 sudo[228988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tikomqwmsfyuxslziclgdxciasaygntw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194358.5011811-571-243189363798994/AnsiballZ_file.py'
Jan 23 18:52:38 compute-0 sudo[228988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:38 compute-0 python3.9[228990]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:39 compute-0 sudo[228988]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:39 compute-0 ceph-mon[75097]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:39 compute-0 sudo[229140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmdzzonmjxorplqcegclannxhdqbllhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194359.1239398-571-128673849313820/AnsiballZ_file.py'
Jan 23 18:52:39 compute-0 sudo[229140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:39 compute-0 python3.9[229142]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:39 compute-0 sudo[229140]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:39 compute-0 sudo[229292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skfonshbvplmtruecvpebqqrzfmuvwdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194359.7249641-571-138496517226813/AnsiballZ_file.py'
Jan 23 18:52:39 compute-0 sudo[229292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:40 compute-0 python3.9[229294]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:40 compute-0 sudo[229292]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:40 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:52:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:52:40 compute-0 ceph-mon[75097]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:40 compute-0 sudo[229445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtvvbcoyvbmfdigtagedyhmauthziegz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194360.3310688-571-234434877648000/AnsiballZ_file.py'
Jan 23 18:52:40 compute-0 sudo[229445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:40 compute-0 python3.9[229447]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:40 compute-0 sudo[229445]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:41 compute-0 sudo[229597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnonyokztbtvlnetsuluuplrddmfgtcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194360.9147263-571-233264129598108/AnsiballZ_file.py'
Jan 23 18:52:41 compute-0 sudo[229597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:41 compute-0 python3.9[229599]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:41 compute-0 sudo[229597]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:41 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 23 18:52:41 compute-0 sudo[229750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrtepxzpnwdaveatghhonotytcodazuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194361.4761066-571-1854183526614/AnsiballZ_file.py'
Jan 23 18:52:41 compute-0 sudo[229750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:41 compute-0 python3.9[229752]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:52:41 compute-0 sudo[229750]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:42 compute-0 sudo[229908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oapaoqndhmfwpvrljxljddlqssfksldg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194362.15733-629-19047111722987/AnsiballZ_command.py'
Jan 23 18:52:42 compute-0 sudo[229908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:42 compute-0 podman[229876]: 2026-01-23 18:52:42.540760896 +0000 UTC m=+0.115660407 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 23 18:52:42 compute-0 python3.9[229921]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:42 compute-0 sudo[229908]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:43 compute-0 ceph-mon[75097]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:43 compute-0 python3.9[230082]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 18:52:44 compute-0 sudo[230232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpmlqiqpqdridsmqmkgpuqrxvpcxkvwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194363.7676284-647-114525249398046/AnsiballZ_systemd_service.py'
Jan 23 18:52:44 compute-0 sudo[230232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:44 compute-0 python3.9[230234]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 18:52:44 compute-0 systemd[1]: Reloading.
Jan 23 18:52:44 compute-0 systemd-rc-local-generator[230260]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:52:44 compute-0 systemd-sysv-generator[230263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:52:44 compute-0 sudo[230232]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:44 compute-0 ceph-mon[75097]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:45 compute-0 sudo[230419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqyhuqcwtrazzqptzgyvdeqpdktblawo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194364.8469808-655-86661420708142/AnsiballZ_command.py'
Jan 23 18:52:45 compute-0 sudo[230419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:45 compute-0 python3.9[230421]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:45 compute-0 sudo[230419]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:45 compute-0 sudo[230572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olaevpnlognqzdfznwgeihcqlsjulxvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194365.514361-655-278327051776324/AnsiballZ_command.py'
Jan 23 18:52:45 compute-0 sudo[230572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:45 compute-0 python3.9[230574]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:45 compute-0 sudo[230572]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:46 compute-0 sudo[230725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqsvrykaumspidcnueqgqkhkwjmsccev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194366.1139586-655-259495142859471/AnsiballZ_command.py'
Jan 23 18:52:46 compute-0 sudo[230725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:46 compute-0 python3.9[230727]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:46 compute-0 sudo[230725]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:47 compute-0 sudo[230878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nemjfexyxaomxooqfdulpeaxhjmxdvwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194366.7592936-655-91107037497381/AnsiballZ_command.py'
Jan 23 18:52:47 compute-0 sudo[230878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:47 compute-0 python3.9[230880]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:47 compute-0 sudo[230878]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:47 compute-0 ceph-mon[75097]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:47 compute-0 sudo[231031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umhdeclxupjdxicizlovogqpfhoyzuvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194367.4618633-655-51993826800074/AnsiballZ_command.py'
Jan 23 18:52:47 compute-0 sudo[231031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:47 compute-0 python3.9[231033]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:47 compute-0 sudo[231031]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:48 compute-0 sudo[231184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgljfcsecxsmunuhfolqxpfhwrgejisg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194368.079308-655-225236732330291/AnsiballZ_command.py'
Jan 23 18:52:48 compute-0 sudo[231184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:48 compute-0 python3.9[231186]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:48 compute-0 sudo[231184]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:48 compute-0 sudo[231337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdirjoxvobwcujrppdrmvfapzvnbkujz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194368.695798-655-217465871867129/AnsiballZ_command.py'
Jan 23 18:52:48 compute-0 sudo[231337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:49 compute-0 python3.9[231339]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:49 compute-0 sudo[231337]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:49 compute-0 ceph-mon[75097]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:49 compute-0 sudo[231490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fasalkmtimcqlfntoqvskrgusmlihsae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194369.3523095-655-165166874478775/AnsiballZ_command.py'
Jan 23 18:52:49 compute-0 sudo[231490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:49 compute-0 python3.9[231492]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 18:52:49 compute-0 sudo[231490]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:50 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 23 18:52:50 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 23 18:52:50 compute-0 sudo[231645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfcuqbrdgyqafzcaxtwvgaovfgupimbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194370.6326413-734-18117555351679/AnsiballZ_file.py'
Jan 23 18:52:50 compute-0 sudo[231645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:51 compute-0 python3.9[231647]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:52:51 compute-0 sudo[231645]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:51 compute-0 ceph-mon[75097]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:51 compute-0 sudo[231797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ticrvbfhldqxpsxjlexqxkdgzaraybwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194371.2720492-734-29798577966414/AnsiballZ_file.py'
Jan 23 18:52:51 compute-0 sudo[231797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:51 compute-0 python3.9[231799]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:52:51 compute-0 sudo[231797]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:51 compute-0 podman[231800]: 2026-01-23 18:52:51.846259879 +0000 UTC m=+0.068250862 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 18:52:52 compute-0 sudo[231968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytyunciszijvztmbplasodanydbfmguy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194371.9198956-734-156801269754135/AnsiballZ_file.py'
Jan 23 18:52:52 compute-0 sudo[231968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:52 compute-0 python3.9[231970]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:52:52 compute-0 sudo[231968]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:52 compute-0 sudo[232120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpjvmhjyquiosucohfkfwhqwsekpohba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194372.5815327-756-217089969416173/AnsiballZ_file.py'
Jan 23 18:52:52 compute-0 sudo[232120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:53 compute-0 python3.9[232122]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:52:53 compute-0 sudo[232120]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:53 compute-0 ceph-mon[75097]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:53 compute-0 sudo[232272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eozmhqjqxouhaonmwazuiosityslopio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194373.1759803-756-118613038991058/AnsiballZ_file.py'
Jan 23 18:52:53 compute-0 sudo[232272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:53 compute-0 python3.9[232274]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:52:53 compute-0 sudo[232272]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:54 compute-0 sudo[232424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlxzgklipfpkravbgpfpyjgdupegymzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194373.86278-756-232235391868754/AnsiballZ_file.py'
Jan 23 18:52:54 compute-0 sudo[232424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:54 compute-0 python3.9[232426]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:52:54 compute-0 sudo[232424]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:54 compute-0 sudo[232576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmalzykecrctkmzadinlpsfpsagpvygk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194374.5021293-756-259614312570769/AnsiballZ_file.py'
Jan 23 18:52:54 compute-0 sudo[232576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:54 compute-0 python3.9[232578]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:52:54 compute-0 sudo[232576]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:55 compute-0 ceph-mon[75097]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:55 compute-0 sudo[232728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdnkmvxfzauzhpikqndrifmpejpkfswy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194375.0949607-756-267903736598063/AnsiballZ_file.py'
Jan 23 18:52:55 compute-0 sudo[232728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:55 compute-0 python3.9[232730]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:52:55 compute-0 sudo[232728]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:56 compute-0 sudo[232880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-malrijufdrzpgnqcxkapqpujdvxikzuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194375.866897-756-211308035681392/AnsiballZ_file.py'
Jan 23 18:52:56 compute-0 sudo[232880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:56 compute-0 python3.9[232882]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:52:56 compute-0 sudo[232880]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:56 compute-0 sudo[233032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkszslhmzwntssclfxbqoyoguvsggvfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194376.521852-756-191298443657630/AnsiballZ_file.py'
Jan 23 18:52:56 compute-0 sudo[233032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:52:56 compute-0 python3.9[233034]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:52:56 compute-0 sudo[233032]: pam_unix(sudo:session): session closed for user root
Jan 23 18:52:57 compute-0 ceph-mon[75097]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:52:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:52:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:00 compute-0 ceph-mon[75097]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:53:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:53:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:53:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:53:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:53:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:53:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:01 compute-0 ceph-mon[75097]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:02 compute-0 sudo[233184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfyxiptxcldgxmqsepollkhsaockezjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194381.7087188-945-114791950675705/AnsiballZ_getent.py'
Jan 23 18:53:02 compute-0 sudo[233184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:02 compute-0 python3.9[233186]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 23 18:53:02 compute-0 sudo[233184]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:02 compute-0 ceph-mon[75097]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:02 compute-0 sudo[233337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apwsewhahvarzjljxutoxzbrjvkuacug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194382.4872148-953-127318452502986/AnsiballZ_group.py'
Jan 23 18:53:02 compute-0 sudo[233337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:03 compute-0 python3.9[233339]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 18:53:03 compute-0 groupadd[233340]: group added to /etc/group: name=nova, GID=42436
Jan 23 18:53:03 compute-0 groupadd[233340]: group added to /etc/gshadow: name=nova
Jan 23 18:53:03 compute-0 groupadd[233340]: new group: name=nova, GID=42436
Jan 23 18:53:03 compute-0 sudo[233337]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:03 compute-0 sudo[233496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifvlibatmncobhicsjppklzucbkimgyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194383.3881812-961-278449118609121/AnsiballZ_user.py'
Jan 23 18:53:03 compute-0 sudo[233496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:04 compute-0 python3.9[233498]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 18:53:04 compute-0 useradd[233500]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 23 18:53:04 compute-0 useradd[233500]: add 'nova' to group 'libvirt'
Jan 23 18:53:04 compute-0 useradd[233500]: add 'nova' to shadow group 'libvirt'
Jan 23 18:53:04 compute-0 sudo[233496]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:04 compute-0 ceph-mon[75097]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:05 compute-0 sshd-session[233531]: Accepted publickey for zuul from 192.168.122.30 port 33326 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 18:53:05 compute-0 systemd-logind[792]: New session 50 of user zuul.
Jan 23 18:53:05 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 23 18:53:05 compute-0 sshd-session[233531]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 18:53:05 compute-0 sshd-session[233534]: Received disconnect from 192.168.122.30 port 33326:11: disconnected by user
Jan 23 18:53:05 compute-0 sshd-session[233534]: Disconnected from user zuul 192.168.122.30 port 33326
Jan 23 18:53:05 compute-0 sshd-session[233531]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:53:05 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 23 18:53:05 compute-0 systemd-logind[792]: Session 50 logged out. Waiting for processes to exit.
Jan 23 18:53:05 compute-0 systemd-logind[792]: Removed session 50.
Jan 23 18:53:05 compute-0 python3.9[233684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:53:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:06 compute-0 python3.9[233805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769194385.4122617-986-65500323303070/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:53:06 compute-0 python3.9[233955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:53:07 compute-0 ceph-mon[75097]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:07 compute-0 python3.9[234031]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:53:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:07 compute-0 python3.9[234181]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:53:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:08 compute-0 python3.9[234302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769194387.5166829-986-62510915071327/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:53:09 compute-0 python3.9[234452]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:53:09 compute-0 ceph-mon[75097]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:09 compute-0 python3.9[234573]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769194388.600006-986-150150841275494/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:53:10 compute-0 python3.9[234723]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:53:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:10 compute-0 python3.9[234844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769194389.7341325-986-147693991875588/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:53:10 compute-0 sudo[234875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:53:10 compute-0 sudo[234875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:53:10 compute-0 sudo[234875]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:10 compute-0 sudo[234935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:53:10 compute-0 sudo[234935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:53:11 compute-0 python3.9[235044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:53:11 compute-0 ceph-mon[75097]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:11 compute-0 sudo[234935]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:53:11 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:53:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:53:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:53:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:53:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:53:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:53:11 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:53:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:53:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:53:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:53:11 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:53:11 compute-0 sudo[235171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:53:11 compute-0 sudo[235171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:53:11 compute-0 sudo[235171]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:11 compute-0 sudo[235223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:53:11 compute-0 sudo[235223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:53:11 compute-0 python3.9[235222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769194390.9026017-986-274382899076714/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:53:11 compute-0 podman[235265]: 2026-01-23 18:53:11.937549381 +0000 UTC m=+0.045293544 container create 7b6cdbe7c0a6197f9856a4f75df7f8c221d2b69bc2056417fa78788fca62089f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 18:53:11 compute-0 systemd[1]: Started libpod-conmon-7b6cdbe7c0a6197f9856a4f75df7f8c221d2b69bc2056417fa78788fca62089f.scope.
Jan 23 18:53:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:53:12 compute-0 podman[235265]: 2026-01-23 18:53:11.916692143 +0000 UTC m=+0.024436326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:53:12 compute-0 podman[235265]: 2026-01-23 18:53:12.028619957 +0000 UTC m=+0.136364210 container init 7b6cdbe7c0a6197f9856a4f75df7f8c221d2b69bc2056417fa78788fca62089f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:53:12 compute-0 podman[235265]: 2026-01-23 18:53:12.041106861 +0000 UTC m=+0.148851024 container start 7b6cdbe7c0a6197f9856a4f75df7f8c221d2b69bc2056417fa78788fca62089f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 23 18:53:12 compute-0 podman[235265]: 2026-01-23 18:53:12.044730649 +0000 UTC m=+0.152474812 container attach 7b6cdbe7c0a6197f9856a4f75df7f8c221d2b69bc2056417fa78788fca62089f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:53:12 compute-0 stupefied_cori[235299]: 167 167
Jan 23 18:53:12 compute-0 systemd[1]: libpod-7b6cdbe7c0a6197f9856a4f75df7f8c221d2b69bc2056417fa78788fca62089f.scope: Deactivated successfully.
Jan 23 18:53:12 compute-0 podman[235265]: 2026-01-23 18:53:12.046402969 +0000 UTC m=+0.154147152 container died 7b6cdbe7c0a6197f9856a4f75df7f8c221d2b69bc2056417fa78788fca62089f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 18:53:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4cb0e7935050eced3d11101cf9a2d61c4ec8818103cb9e390dfe95c134a6db4-merged.mount: Deactivated successfully.
Jan 23 18:53:12 compute-0 podman[235265]: 2026-01-23 18:53:12.096955429 +0000 UTC m=+0.204699592 container remove 7b6cdbe7c0a6197f9856a4f75df7f8c221d2b69bc2056417fa78788fca62089f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:53:12 compute-0 systemd[1]: libpod-conmon-7b6cdbe7c0a6197f9856a4f75df7f8c221d2b69bc2056417fa78788fca62089f.scope: Deactivated successfully.
Jan 23 18:53:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:12 compute-0 podman[235400]: 2026-01-23 18:53:12.291028093 +0000 UTC m=+0.068218372 container create 99a565a7998e1dffba0d609d50f877e15863b451e341861b9ce0232d6ae1d829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_sinoussi, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 18:53:12 compute-0 sudo[235464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pttnmxtgqnnxyhzmwghtjyiqwkskvyfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194392.0554767-1069-25802165265520/AnsiballZ_file.py'
Jan 23 18:53:12 compute-0 sudo[235464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:12 compute-0 systemd[1]: Started libpod-conmon-99a565a7998e1dffba0d609d50f877e15863b451e341861b9ce0232d6ae1d829.scope.
Jan 23 18:53:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:53:12 compute-0 podman[235400]: 2026-01-23 18:53:12.256863752 +0000 UTC m=+0.034054081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeac319795580844555efa9d088fc0eeff8bb9f1e5d4421fd0b7fa4660fb96a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeac319795580844555efa9d088fc0eeff8bb9f1e5d4421fd0b7fa4660fb96a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeac319795580844555efa9d088fc0eeff8bb9f1e5d4421fd0b7fa4660fb96a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeac319795580844555efa9d088fc0eeff8bb9f1e5d4421fd0b7fa4660fb96a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeac319795580844555efa9d088fc0eeff8bb9f1e5d4421fd0b7fa4660fb96a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:53:12 compute-0 podman[235400]: 2026-01-23 18:53:12.377366744 +0000 UTC m=+0.154557023 container init 99a565a7998e1dffba0d609d50f877e15863b451e341861b9ce0232d6ae1d829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 18:53:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:53:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:53:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:53:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:53:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:53:12 compute-0 podman[235400]: 2026-01-23 18:53:12.385181104 +0000 UTC m=+0.162371383 container start 99a565a7998e1dffba0d609d50f877e15863b451e341861b9ce0232d6ae1d829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 18:53:12 compute-0 podman[235400]: 2026-01-23 18:53:12.387781407 +0000 UTC m=+0.164971746 container attach 99a565a7998e1dffba0d609d50f877e15863b451e341861b9ce0232d6ae1d829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 18:53:12 compute-0 python3.9[235469]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:53:12 compute-0 sudo[235464]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:12 compute-0 elated_sinoussi[235470]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:53:12 compute-0 elated_sinoussi[235470]: --> All data devices are unavailable
Jan 23 18:53:12 compute-0 systemd[1]: libpod-99a565a7998e1dffba0d609d50f877e15863b451e341861b9ce0232d6ae1d829.scope: Deactivated successfully.
Jan 23 18:53:12 compute-0 podman[235400]: 2026-01-23 18:53:12.939179466 +0000 UTC m=+0.716369755 container died 99a565a7998e1dffba0d609d50f877e15863b451e341861b9ce0232d6ae1d829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:53:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-adeac319795580844555efa9d088fc0eeff8bb9f1e5d4421fd0b7fa4660fb96a-merged.mount: Deactivated successfully.
Jan 23 18:53:12 compute-0 sudo[235657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quxslbuptnjfgqvqitktysedmrffyaou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194392.699532-1077-272124278250947/AnsiballZ_copy.py'
Jan 23 18:53:12 compute-0 podman[235400]: 2026-01-23 18:53:12.982637053 +0000 UTC m=+0.759827332 container remove 99a565a7998e1dffba0d609d50f877e15863b451e341861b9ce0232d6ae1d829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 18:53:12 compute-0 sudo[235657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:13 compute-0 systemd[1]: libpod-conmon-99a565a7998e1dffba0d609d50f877e15863b451e341861b9ce0232d6ae1d829.scope: Deactivated successfully.
Jan 23 18:53:13 compute-0 podman[235612]: 2026-01-23 18:53:13.032442176 +0000 UTC m=+0.109310801 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:53:13 compute-0 sudo[235223]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:13 compute-0 sudo[235678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:53:13 compute-0 sudo[235678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:53:13 compute-0 sudo[235678]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:13 compute-0 python3.9[235669]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:53:13 compute-0 sudo[235704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:53:13 compute-0 sudo[235704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:53:13 compute-0 sudo[235657]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:13 compute-0 ceph-mon[75097]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:13 compute-0 podman[235821]: 2026-01-23 18:53:13.451858002 +0000 UTC m=+0.040028245 container create f8aa6e9ee83e29124e698c55a51157464f01f75ed1a858e11011961811477d4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:53:13 compute-0 systemd[1]: Started libpod-conmon-f8aa6e9ee83e29124e698c55a51157464f01f75ed1a858e11011961811477d4f.scope.
Jan 23 18:53:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:53:13 compute-0 podman[235821]: 2026-01-23 18:53:13.435402361 +0000 UTC m=+0.023572624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:53:13 compute-0 podman[235821]: 2026-01-23 18:53:13.53684317 +0000 UTC m=+0.125013433 container init f8aa6e9ee83e29124e698c55a51157464f01f75ed1a858e11011961811477d4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:53:13 compute-0 podman[235821]: 2026-01-23 18:53:13.543863641 +0000 UTC m=+0.132033894 container start f8aa6e9ee83e29124e698c55a51157464f01f75ed1a858e11011961811477d4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 23 18:53:13 compute-0 podman[235821]: 2026-01-23 18:53:13.547182232 +0000 UTC m=+0.135352475 container attach f8aa6e9ee83e29124e698c55a51157464f01f75ed1a858e11011961811477d4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_napier, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 18:53:13 compute-0 magical_napier[235880]: 167 167
Jan 23 18:53:13 compute-0 systemd[1]: libpod-f8aa6e9ee83e29124e698c55a51157464f01f75ed1a858e11011961811477d4f.scope: Deactivated successfully.
Jan 23 18:53:13 compute-0 podman[235821]: 2026-01-23 18:53:13.551287862 +0000 UTC m=+0.139458125 container died f8aa6e9ee83e29124e698c55a51157464f01f75ed1a858e11011961811477d4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 18:53:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d73995b8efe55b32c0b0dc6a4867e369528ab0cff8b0f145fccaca77472a919-merged.mount: Deactivated successfully.
Jan 23 18:53:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:53:13.572 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:53:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:53:13.574 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:53:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:53:13.574 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:53:13 compute-0 podman[235821]: 2026-01-23 18:53:13.589863451 +0000 UTC m=+0.178033694 container remove f8aa6e9ee83e29124e698c55a51157464f01f75ed1a858e11011961811477d4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 18:53:13 compute-0 sudo[235918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrjczvvhkogjxwazjkqhdsegonyjeuoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194393.322815-1085-174470219796193/AnsiballZ_stat.py'
Jan 23 18:53:13 compute-0 systemd[1]: libpod-conmon-f8aa6e9ee83e29124e698c55a51157464f01f75ed1a858e11011961811477d4f.scope: Deactivated successfully.
Jan 23 18:53:13 compute-0 sudo[235918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:13 compute-0 podman[235933]: 2026-01-23 18:53:13.760540564 +0000 UTC m=+0.047082137 container create 4980e3d5bae01dd277b054ab9187bee528aedb266635509ec0e02298c2f9fcbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 18:53:13 compute-0 python3.9[235925]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:53:13 compute-0 sudo[235918]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:13 compute-0 systemd[1]: Started libpod-conmon-4980e3d5bae01dd277b054ab9187bee528aedb266635509ec0e02298c2f9fcbf.scope.
Jan 23 18:53:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f448ebfa7c6c44e74ba69e2c203691163af3ad7a66dceaa19cea024ea6df91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f448ebfa7c6c44e74ba69e2c203691163af3ad7a66dceaa19cea024ea6df91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f448ebfa7c6c44e74ba69e2c203691163af3ad7a66dceaa19cea024ea6df91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f448ebfa7c6c44e74ba69e2c203691163af3ad7a66dceaa19cea024ea6df91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:13 compute-0 podman[235933]: 2026-01-23 18:53:13.742111845 +0000 UTC m=+0.028653438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:53:13 compute-0 podman[235933]: 2026-01-23 18:53:13.845263486 +0000 UTC m=+0.131805099 container init 4980e3d5bae01dd277b054ab9187bee528aedb266635509ec0e02298c2f9fcbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 18:53:13 compute-0 podman[235933]: 2026-01-23 18:53:13.850871242 +0000 UTC m=+0.137412815 container start 4980e3d5bae01dd277b054ab9187bee528aedb266635509ec0e02298c2f9fcbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:53:13 compute-0 podman[235933]: 2026-01-23 18:53:13.854705846 +0000 UTC m=+0.141247419 container attach 4980e3d5bae01dd277b054ab9187bee528aedb266635509ec0e02298c2f9fcbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]: {
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:     "0": [
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:         {
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "devices": [
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "/dev/loop3"
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             ],
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_name": "ceph_lv0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_size": "21470642176",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "name": "ceph_lv0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "tags": {
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.cluster_name": "ceph",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.crush_device_class": "",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.encrypted": "0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.objectstore": "bluestore",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.osd_id": "0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.type": "block",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.vdo": "0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.with_tpm": "0"
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             },
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "type": "block",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "vg_name": "ceph_vg0"
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:         }
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:     ],
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:     "1": [
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:         {
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "devices": [
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "/dev/loop4"
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             ],
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_name": "ceph_lv1",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_size": "21470642176",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "name": "ceph_lv1",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "tags": {
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.cluster_name": "ceph",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.crush_device_class": "",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.encrypted": "0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.objectstore": "bluestore",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.osd_id": "1",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.type": "block",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.vdo": "0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.with_tpm": "0"
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             },
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "type": "block",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "vg_name": "ceph_vg1"
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:         }
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:     ],
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:     "2": [
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:         {
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "devices": [
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "/dev/loop5"
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             ],
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_name": "ceph_lv2",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_size": "21470642176",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "name": "ceph_lv2",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "tags": {
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.cluster_name": "ceph",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.crush_device_class": "",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.encrypted": "0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.objectstore": "bluestore",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.osd_id": "2",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.type": "block",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.vdo": "0",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:                 "ceph.with_tpm": "0"
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             },
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "type": "block",
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:             "vg_name": "ceph_vg2"
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:         }
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]:     ]
Jan 23 18:53:14 compute-0 flamboyant_lumiere[235951]: }
Jan 23 18:53:14 compute-0 sudo[236109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiefmyhqogulctdtwinqhzzyfvurwkoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194393.9378102-1093-5684414280643/AnsiballZ_stat.py'
Jan 23 18:53:14 compute-0 systemd[1]: libpod-4980e3d5bae01dd277b054ab9187bee528aedb266635509ec0e02298c2f9fcbf.scope: Deactivated successfully.
Jan 23 18:53:14 compute-0 conmon[235951]: conmon 4980e3d5bae01dd277b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4980e3d5bae01dd277b054ab9187bee528aedb266635509ec0e02298c2f9fcbf.scope/container/memory.events
Jan 23 18:53:14 compute-0 sudo[236109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:14 compute-0 podman[235933]: 2026-01-23 18:53:14.188638012 +0000 UTC m=+0.475179605 container died 4980e3d5bae01dd277b054ab9187bee528aedb266635509ec0e02298c2f9fcbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:53:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6f448ebfa7c6c44e74ba69e2c203691163af3ad7a66dceaa19cea024ea6df91-merged.mount: Deactivated successfully.
Jan 23 18:53:14 compute-0 podman[235933]: 2026-01-23 18:53:14.229513967 +0000 UTC m=+0.516055540 container remove 4980e3d5bae01dd277b054ab9187bee528aedb266635509ec0e02298c2f9fcbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:53:14 compute-0 systemd[1]: libpod-conmon-4980e3d5bae01dd277b054ab9187bee528aedb266635509ec0e02298c2f9fcbf.scope: Deactivated successfully.
Jan 23 18:53:14 compute-0 sudo[235704]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:14 compute-0 sudo[236125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:53:14 compute-0 sudo[236125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:53:14 compute-0 sudo[236125]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:14 compute-0 sudo[236150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:53:14 compute-0 python3.9[236111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:53:14 compute-0 sudo[236150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:53:14 compute-0 sudo[236109]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:14 compute-0 podman[236258]: 2026-01-23 18:53:14.674944367 +0000 UTC m=+0.037049153 container create 0f983b16f38d45dfc3d449aaf2976b3d7e458246fad55aeb5ab61fc2adcf1d42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nash, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:53:14 compute-0 systemd[1]: Started libpod-conmon-0f983b16f38d45dfc3d449aaf2976b3d7e458246fad55aeb5ab61fc2adcf1d42.scope.
Jan 23 18:53:14 compute-0 podman[236258]: 2026-01-23 18:53:14.658879875 +0000 UTC m=+0.020984681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:53:14 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:53:14 compute-0 podman[236258]: 2026-01-23 18:53:14.795122492 +0000 UTC m=+0.157227308 container init 0f983b16f38d45dfc3d449aaf2976b3d7e458246fad55aeb5ab61fc2adcf1d42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:53:14 compute-0 podman[236258]: 2026-01-23 18:53:14.803076614 +0000 UTC m=+0.165181420 container start 0f983b16f38d45dfc3d449aaf2976b3d7e458246fad55aeb5ab61fc2adcf1d42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nash, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:53:14 compute-0 podman[236258]: 2026-01-23 18:53:14.807392759 +0000 UTC m=+0.169497575 container attach 0f983b16f38d45dfc3d449aaf2976b3d7e458246fad55aeb5ab61fc2adcf1d42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:53:14 compute-0 systemd[1]: libpod-0f983b16f38d45dfc3d449aaf2976b3d7e458246fad55aeb5ab61fc2adcf1d42.scope: Deactivated successfully.
Jan 23 18:53:14 compute-0 nervous_nash[236299]: 167 167
Jan 23 18:53:14 compute-0 conmon[236299]: conmon 0f983b16f38d45dfc3d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f983b16f38d45dfc3d449aaf2976b3d7e458246fad55aeb5ab61fc2adcf1d42.scope/container/memory.events
Jan 23 18:53:14 compute-0 podman[236258]: 2026-01-23 18:53:14.809707876 +0000 UTC m=+0.171812682 container died 0f983b16f38d45dfc3d449aaf2976b3d7e458246fad55aeb5ab61fc2adcf1d42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nash, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:53:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-644dc0cc903117eb40c7dbbe0c4c679d9fffc716a834f942043bde64217a7ad1-merged.mount: Deactivated successfully.
Jan 23 18:53:14 compute-0 podman[236258]: 2026-01-23 18:53:14.847501556 +0000 UTC m=+0.209606342 container remove 0f983b16f38d45dfc3d449aaf2976b3d7e458246fad55aeb5ab61fc2adcf1d42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 18:53:14 compute-0 sudo[236337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uelzkcvlufndygaxqhkvjqfrdkjuqgkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194393.9378102-1093-5684414280643/AnsiballZ_copy.py'
Jan 23 18:53:14 compute-0 sudo[236337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:14 compute-0 systemd[1]: libpod-conmon-0f983b16f38d45dfc3d449aaf2976b3d7e458246fad55aeb5ab61fc2adcf1d42.scope: Deactivated successfully.
Jan 23 18:53:15 compute-0 podman[236352]: 2026-01-23 18:53:15.017171245 +0000 UTC m=+0.041357287 container create 411a83ec47b7d67e22d1dd8874d0200d1b4d54706d4fcea826fff2e49fe796cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ellis, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 23 18:53:15 compute-0 python3.9[236345]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769194393.9378102-1093-5684414280643/.source _original_basename=.qkt4v_n5 follow=False checksum=6e2768514392b03db126d76a7ebed69988c58c74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 23 18:53:15 compute-0 systemd[1]: Started libpod-conmon-411a83ec47b7d67e22d1dd8874d0200d1b4d54706d4fcea826fff2e49fe796cb.scope.
Jan 23 18:53:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:53:15 compute-0 sudo[236337]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4237d093f53e466cfce6672bc0d3da592eed2f8cf9086be175e5865b36a89570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4237d093f53e466cfce6672bc0d3da592eed2f8cf9086be175e5865b36a89570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4237d093f53e466cfce6672bc0d3da592eed2f8cf9086be175e5865b36a89570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4237d093f53e466cfce6672bc0d3da592eed2f8cf9086be175e5865b36a89570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:15 compute-0 podman[236352]: 2026-01-23 18:53:14.997545827 +0000 UTC m=+0.021731899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:53:15 compute-0 podman[236352]: 2026-01-23 18:53:15.103326992 +0000 UTC m=+0.127513134 container init 411a83ec47b7d67e22d1dd8874d0200d1b4d54706d4fcea826fff2e49fe796cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ellis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 23 18:53:15 compute-0 podman[236352]: 2026-01-23 18:53:15.112287409 +0000 UTC m=+0.136473451 container start 411a83ec47b7d67e22d1dd8874d0200d1b4d54706d4fcea826fff2e49fe796cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ellis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:53:15 compute-0 podman[236352]: 2026-01-23 18:53:15.115794285 +0000 UTC m=+0.139980357 container attach 411a83ec47b7d67e22d1dd8874d0200d1b4d54706d4fcea826fff2e49fe796cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 18:53:15 compute-0 ceph-mon[75097]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:15 compute-0 python3.9[236567]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:53:15 compute-0 lvm[236620]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:53:15 compute-0 lvm[236620]: VG ceph_vg0 finished
Jan 23 18:53:15 compute-0 lvm[236623]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:53:15 compute-0 lvm[236623]: VG ceph_vg1 finished
Jan 23 18:53:15 compute-0 lvm[236628]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:53:15 compute-0 lvm[236628]: VG ceph_vg2 finished
Jan 23 18:53:15 compute-0 lvm[236629]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:53:15 compute-0 lvm[236629]: VG ceph_vg1 finished
Jan 23 18:53:15 compute-0 lvm[236641]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:53:15 compute-0 lvm[236641]: VG ceph_vg1 finished
Jan 23 18:53:15 compute-0 admiring_ellis[236369]: {}
Jan 23 18:53:16 compute-0 systemd[1]: libpod-411a83ec47b7d67e22d1dd8874d0200d1b4d54706d4fcea826fff2e49fe796cb.scope: Deactivated successfully.
Jan 23 18:53:16 compute-0 systemd[1]: libpod-411a83ec47b7d67e22d1dd8874d0200d1b4d54706d4fcea826fff2e49fe796cb.scope: Consumed 1.369s CPU time.
Jan 23 18:53:16 compute-0 podman[236352]: 2026-01-23 18:53:16.024621312 +0000 UTC m=+1.048807354 container died 411a83ec47b7d67e22d1dd8874d0200d1b4d54706d4fcea826fff2e49fe796cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 18:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4237d093f53e466cfce6672bc0d3da592eed2f8cf9086be175e5865b36a89570-merged.mount: Deactivated successfully.
Jan 23 18:53:16 compute-0 podman[236352]: 2026-01-23 18:53:16.075398067 +0000 UTC m=+1.099584109 container remove 411a83ec47b7d67e22d1dd8874d0200d1b4d54706d4fcea826fff2e49fe796cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 18:53:16 compute-0 systemd[1]: libpod-conmon-411a83ec47b7d67e22d1dd8874d0200d1b4d54706d4fcea826fff2e49fe796cb.scope: Deactivated successfully.
Jan 23 18:53:16 compute-0 sudo[236150]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:53:16 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:53:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:53:16 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:53:16 compute-0 sudo[236741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:53:16 compute-0 sudo[236741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:53:16 compute-0 sudo[236741]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:16 compute-0 python3.9[236795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:53:16 compute-0 python3.9[236916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769194395.965502-1119-76040033892312/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:53:17 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:53:17 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:53:17 compute-0 ceph-mon[75097]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:17 compute-0 python3.9[237066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 18:53:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:18 compute-0 python3.9[237187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769194397.0875099-1134-71740995789526/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 18:53:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:18 compute-0 sudo[237337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovaykpwuovqkmclrrklqhsevrjmtvwgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194398.4243593-1151-18177997620181/AnsiballZ_container_config_data.py'
Jan 23 18:53:18 compute-0 sudo[237337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:19 compute-0 python3.9[237339]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 23 18:53:19 compute-0 sudo[237337]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:19 compute-0 ceph-mon[75097]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:19 compute-0 sudo[237489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyursvrijywkuwgpsialpaemdiowwyyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194399.3495896-1162-68551701608074/AnsiballZ_container_config_hash.py'
Jan 23 18:53:19 compute-0 sudo[237489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:20 compute-0 python3.9[237491]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 18:53:20 compute-0 sudo[237489]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:20 compute-0 sudo[237641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkummydhpejsxnwglaljhvmapyznnalr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769194400.4692938-1172-112368694100250/AnsiballZ_edpm_container_manage.py'
Jan 23 18:53:20 compute-0 sudo[237641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:21 compute-0 python3[237643]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 18:53:21 compute-0 ceph-mon[75097]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:21 compute-0 podman[237671]: 2026-01-23 18:53:21.980681836 +0000 UTC m=+0.052448028 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 18:53:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:23 compute-0 ceph-mon[75097]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:25 compute-0 ceph-mon[75097]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:27 compute-0 ceph-mon[75097]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:53:28
Jan 23 18:53:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:53:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:53:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'backups', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'default.rgw.control']
Jan 23 18:53:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:53:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:53:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:33 compute-0 ceph-mon[75097]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:33 compute-0 podman[237657]: 2026-01-23 18:53:33.446086161 +0000 UTC m=+12.181291418 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 23 18:53:33 compute-0 podman[237761]: 2026-01-23 18:53:33.590161848 +0000 UTC m=+0.054500668 container create 5a2393339e0654b823d90c3261576f4fe0719499dc0c42e3fbe10c0c93f50373 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0)
Jan 23 18:53:33 compute-0 podman[237761]: 2026-01-23 18:53:33.558738003 +0000 UTC m=+0.023076863 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 23 18:53:33 compute-0 python3[237643]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 23 18:53:33 compute-0 sudo[237641]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:34 compute-0 sudo[237950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovglqjgeqyencofwfkbhgtgxmhonaaft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194413.9107683-1180-57660075236482/AnsiballZ_stat.py'
Jan 23 18:53:34 compute-0 sudo[237950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:34 compute-0 ceph-mon[75097]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:34 compute-0 ceph-mon[75097]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:34 compute-0 python3.9[237952]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:53:34 compute-0 sudo[237950]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:35 compute-0 sudo[238104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vowxrwvqyujbmcggozjftzzbykboaitx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194414.9338746-1192-113335428991930/AnsiballZ_container_config_data.py'
Jan 23 18:53:35 compute-0 sudo[238104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:35 compute-0 ceph-mon[75097]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:35 compute-0 python3.9[238106]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 23 18:53:35 compute-0 sudo[238104]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:35 compute-0 sudo[238256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okhuztpxzoemzhyrbsaifdwmnblxoacw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194415.6564648-1203-91934364957/AnsiballZ_container_config_hash.py'
Jan 23 18:53:35 compute-0 sudo[238256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:36 compute-0 python3.9[238258]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 18:53:36 compute-0 sudo[238256]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:36 compute-0 sudo[238408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukbstuihjjyfiiwcfmsmysrckjdwfxwn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769194416.386123-1213-93748702069433/AnsiballZ_edpm_container_manage.py'
Jan 23 18:53:36 compute-0 sudo[238408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:36 compute-0 python3[238410]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 18:53:37 compute-0 podman[238444]: 2026-01-23 18:53:37.100396681 +0000 UTC m=+0.052845726 container create 29082451aafc29729b37267d69211584136d80e0ea0fc75736a75cbd1af6bd97 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 18:53:37 compute-0 podman[238444]: 2026-01-23 18:53:37.072078412 +0000 UTC m=+0.024527477 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 23 18:53:37 compute-0 python3[238410]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 23 18:53:37 compute-0 sudo[238408]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:37 compute-0 ceph-mon[75097]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:37 compute-0 sudo[238632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbokjwxbjxymjpvwjwfvzzvriwvnvyys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194417.3733654-1221-281402480800216/AnsiballZ_stat.py'
Jan 23 18:53:37 compute-0 sudo[238632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:37 compute-0 python3.9[238634]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:53:37 compute-0 sudo[238632]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:38 compute-0 sudo[238786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvkryxphgatpqctrnqyhjrtttesnpsom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194418.0433125-1230-30287122312373/AnsiballZ_file.py'
Jan 23 18:53:38 compute-0 sudo[238786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:38 compute-0 python3.9[238788]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:53:38 compute-0 sudo[238786]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:38 compute-0 sudo[238937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sscwcfjrjnktpoevjrlzcphobwktxbha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194418.5464044-1230-147012649908261/AnsiballZ_copy.py'
Jan 23 18:53:38 compute-0 sudo[238937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:39 compute-0 python3.9[238939]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769194418.5464044-1230-147012649908261/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 18:53:39 compute-0 sudo[238937]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:39 compute-0 sudo[239013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anntgjueksuhijotvbevrqxomeqwugkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194418.5464044-1230-147012649908261/AnsiballZ_systemd.py'
Jan 23 18:53:39 compute-0 sudo[239013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:39 compute-0 ceph-mon[75097]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:39 compute-0 python3.9[239015]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 18:53:39 compute-0 systemd[1]: Reloading.
Jan 23 18:53:39 compute-0 systemd-sysv-generator[239046]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:53:39 compute-0 systemd-rc-local-generator[239040]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:53:40 compute-0 sudo[239013]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:53:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:53:40 compute-0 sudo[239124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnjlxsmpzotuwfpeibfecelcmbdxhksx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194418.5464044-1230-147012649908261/AnsiballZ_systemd.py'
Jan 23 18:53:40 compute-0 sudo[239124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:40 compute-0 python3.9[239126]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 18:53:40 compute-0 systemd[1]: Reloading.
Jan 23 18:53:40 compute-0 systemd-sysv-generator[239159]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 18:53:40 compute-0 systemd-rc-local-generator[239156]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 18:53:41 compute-0 systemd[1]: Starting nova_compute container...
Jan 23 18:53:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:41 compute-0 podman[239166]: 2026-01-23 18:53:41.305802552 +0000 UTC m=+0.129946703 container init 29082451aafc29729b37267d69211584136d80e0ea0fc75736a75cbd1af6bd97 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Jan 23 18:53:41 compute-0 podman[239166]: 2026-01-23 18:53:41.312658819 +0000 UTC m=+0.136802950 container start 29082451aafc29729b37267d69211584136d80e0ea0fc75736a75cbd1af6bd97 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 23 18:53:41 compute-0 podman[239166]: nova_compute
Jan 23 18:53:41 compute-0 systemd[1]: Started nova_compute container.
Jan 23 18:53:41 compute-0 nova_compute[239181]: + sudo -E kolla_set_configs
Jan 23 18:53:41 compute-0 sudo[239124]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Validating config file
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying service configuration files
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Deleting /etc/ceph
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Creating directory /etc/ceph
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/ceph
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Writing out command to execute
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 18:53:41 compute-0 nova_compute[239181]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 18:53:41 compute-0 ceph-mon[75097]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:41 compute-0 nova_compute[239181]: ++ cat /run_command
Jan 23 18:53:41 compute-0 nova_compute[239181]: + CMD=nova-compute
Jan 23 18:53:41 compute-0 nova_compute[239181]: + ARGS=
Jan 23 18:53:41 compute-0 nova_compute[239181]: + sudo kolla_copy_cacerts
Jan 23 18:53:41 compute-0 nova_compute[239181]: + [[ ! -n '' ]]
Jan 23 18:53:41 compute-0 nova_compute[239181]: + . kolla_extend_start
Jan 23 18:53:41 compute-0 nova_compute[239181]: + echo 'Running command: '\''nova-compute'\'''
Jan 23 18:53:41 compute-0 nova_compute[239181]: Running command: 'nova-compute'
Jan 23 18:53:41 compute-0 nova_compute[239181]: + umask 0022
Jan 23 18:53:41 compute-0 nova_compute[239181]: + exec nova-compute
Jan 23 18:53:42 compute-0 python3.9[239342]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:53:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:42 compute-0 python3.9[239493]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:53:43 compute-0 podman[239617]: 2026-01-23 18:53:43.379537648 +0000 UTC m=+0.089002628 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 18:53:43 compute-0 ceph-mon[75097]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:43 compute-0 python3.9[239656]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 18:53:43 compute-0 nova_compute[239181]: 2026-01-23 18:53:43.670 239185 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 18:53:43 compute-0 nova_compute[239181]: 2026-01-23 18:53:43.671 239185 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 18:53:43 compute-0 nova_compute[239181]: 2026-01-23 18:53:43.671 239185 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 18:53:43 compute-0 nova_compute[239181]: 2026-01-23 18:53:43.671 239185 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 23 18:53:43 compute-0 nova_compute[239181]: 2026-01-23 18:53:43.832 239185 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:53:43 compute-0 nova_compute[239181]: 2026-01-23 18:53:43.849 239185 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:53:43 compute-0 nova_compute[239181]: 2026-01-23 18:53:43.849 239185 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 23 18:53:44 compute-0 sudo[239821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugxyejnjeqlvpkqhsydsvdogdkatbwdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194423.7648754-1290-259521137657105/AnsiballZ_podman_container.py'
Jan 23 18:53:44 compute-0 sudo[239821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.414 239185 INFO nova.virt.driver [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 23 18:53:44 compute-0 python3.9[239823]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.552 239185 INFO nova.compute.provider_config [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 23 18:53:44 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 18:53:44 compute-0 ceph-mon[75097]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:44 compute-0 sudo[239821]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.951 239185 DEBUG oslo_concurrency.lockutils [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.952 239185 DEBUG oslo_concurrency.lockutils [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.952 239185 DEBUG oslo_concurrency.lockutils [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.952 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.953 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.953 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.953 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.953 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.953 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.953 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.954 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.954 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.954 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.954 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.954 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.955 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.955 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.955 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.955 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.955 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.956 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.956 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.956 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.956 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.956 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.956 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.957 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.957 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.957 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.957 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.957 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.957 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.958 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.958 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.958 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.958 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.958 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.958 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.958 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.959 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.959 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.959 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.959 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.959 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.959 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.960 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.960 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.960 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.960 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.960 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.961 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.961 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.961 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.961 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.961 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.961 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.962 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.962 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.962 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.962 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.962 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.962 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.962 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.963 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.963 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.963 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.963 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.963 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.963 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.963 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.964 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.964 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.964 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.964 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.964 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.964 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.965 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.965 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.965 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.965 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.965 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.965 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.965 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.966 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.966 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.966 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.966 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.966 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.967 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.967 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.967 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.967 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.967 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.968 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.968 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.968 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.968 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.968 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.968 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.969 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.969 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.969 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.969 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.969 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.969 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.970 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.970 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.970 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.970 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.970 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.971 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.971 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.971 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.971 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.971 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.971 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.972 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.972 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.972 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.972 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.972 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.972 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.973 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.973 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.973 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.973 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.973 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.973 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.973 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.974 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.974 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.974 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.974 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.974 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.975 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.975 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.975 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.975 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.975 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.975 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.976 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.976 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.976 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.976 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.976 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.976 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.977 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.977 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.977 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.977 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.977 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.977 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.978 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.978 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.978 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.978 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.978 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.978 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.979 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.979 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.979 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.979 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.979 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.980 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.980 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.980 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.980 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.980 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.981 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.981 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.981 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.981 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.982 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.982 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.982 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.982 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.982 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.983 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.983 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.983 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.983 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.983 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.984 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.984 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.984 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.984 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.984 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.985 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.985 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.985 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.985 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.985 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.986 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.986 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.986 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.986 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.987 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.987 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.987 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.987 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.987 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.987 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.988 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.988 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.988 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.988 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.988 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.988 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.989 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.989 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.989 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.989 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.989 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.990 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.990 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.990 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.990 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.990 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.991 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.991 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.991 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.991 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.991 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.991 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.992 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.992 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.992 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.992 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.992 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.992 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.993 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.993 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.993 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.993 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.993 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.993 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.993 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.994 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.994 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.994 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.994 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.994 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.994 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.995 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.995 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.995 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.995 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.995 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.995 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.995 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.996 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.996 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.996 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.996 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.996 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.996 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.996 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.997 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.997 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.997 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.997 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.997 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.997 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.997 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.998 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.998 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.998 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.998 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.998 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.999 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.999 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.999 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.999 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:44 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.999 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:44.999 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.000 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.000 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.000 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.000 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.000 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.001 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.001 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.001 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.001 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.001 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.001 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.002 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.002 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.002 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.002 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.002 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.002 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.003 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.003 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.003 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.003 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.003 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.003 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.004 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.004 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.004 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.004 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.004 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.004 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.005 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.005 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.005 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.005 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.005 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.005 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.006 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.006 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.006 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.006 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.006 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.006 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.006 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.007 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.007 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.007 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.007 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.007 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.007 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.008 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.008 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.008 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.008 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.008 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.008 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.009 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.009 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.009 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.009 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.009 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.009 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.009 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.010 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.010 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.010 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.010 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.010 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.010 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.011 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.011 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.011 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.011 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.011 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.012 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.012 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.012 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.012 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.012 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.012 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.013 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.013 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.013 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.013 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.013 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.014 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.014 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.014 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.014 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.014 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.015 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.015 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.015 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.015 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.015 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.015 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.016 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.016 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.016 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.016 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.016 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.016 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.016 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.017 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.017 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.017 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.017 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.017 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.017 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.017 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.018 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.018 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.018 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.018 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.018 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.018 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.018 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.019 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.019 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.019 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.019 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.019 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.019 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.020 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.020 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.020 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.020 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.020 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.020 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.020 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.021 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.021 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.021 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.021 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.021 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.021 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.022 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.022 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.022 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.022 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.022 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.022 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.022 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.023 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.023 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.023 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.023 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.023 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.023 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.023 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.024 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.024 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.024 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.024 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.024 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.024 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.024 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.025 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.025 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.025 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.025 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.025 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.025 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.026 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.026 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.026 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.026 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.026 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.027 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.027 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.027 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.027 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.027 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.027 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.028 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.028 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.028 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.028 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.028 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.029 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.029 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.029 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.029 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.029 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.030 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.030 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.030 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.030 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.030 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.031 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.031 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.031 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.031 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.031 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.031 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.032 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.032 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.032 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.032 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.032 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.033 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.033 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.033 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.033 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.033 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.034 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.034 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.034 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.034 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.034 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.035 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.035 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.035 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.035 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.035 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.036 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.036 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.036 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.036 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.036 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.036 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.037 239185 WARNING oslo_config.cfg [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 23 18:53:45 compute-0 nova_compute[239181]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 23 18:53:45 compute-0 nova_compute[239181]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 23 18:53:45 compute-0 nova_compute[239181]: and ``live_migration_inbound_addr`` respectively.
Jan 23 18:53:45 compute-0 nova_compute[239181]: ).  Its value may be silently ignored in the future.
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.037 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.037 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.037 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.037 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.038 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.038 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.038 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.038 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.038 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.039 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.039 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.039 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.039 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.039 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.039 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.040 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.040 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.040 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.040 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.rbd_secret_uuid        = 4da2adac-d096-5ead-9ec5-3108259ba9e4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.040 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.040 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.041 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.041 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.041 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.041 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.041 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.041 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.041 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.042 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.042 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.042 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.042 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.042 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.042 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.043 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.043 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.043 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.043 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.043 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.043 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.043 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.044 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.044 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.044 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.044 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.044 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.044 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.044 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.045 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.045 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.045 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.045 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.045 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.045 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.045 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.046 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.046 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.046 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.046 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.046 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.046 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.046 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.047 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.047 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.047 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.047 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.047 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.048 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.048 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.048 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.049 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.049 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.049 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.049 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.049 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.049 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.050 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.050 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.050 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.050 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.050 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.051 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.051 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.051 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.051 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.051 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.051 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.052 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.052 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.052 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.052 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.052 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.052 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.053 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.053 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.053 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.053 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.053 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.053 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.053 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.054 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.054 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.054 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.054 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.054 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.054 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.055 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.055 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.055 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.055 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.055 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.055 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.056 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.056 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.056 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.056 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.056 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.056 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.056 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.057 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.057 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.057 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.057 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.057 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.057 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.058 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.058 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.058 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.058 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.058 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.058 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.059 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.059 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.059 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.059 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.059 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.059 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.059 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.060 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.060 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.060 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.060 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.060 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.061 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.061 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.061 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.061 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.061 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.061 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.062 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.062 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.062 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.062 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.062 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.062 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.063 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.063 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.063 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.063 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.063 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.063 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.064 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.064 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.064 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.064 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.064 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.064 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.064 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.065 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.065 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.065 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.065 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.065 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.065 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.066 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.066 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.066 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.066 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.066 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.066 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.067 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.067 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.067 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.067 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.067 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.067 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.068 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.068 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.068 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.068 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.068 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.068 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.069 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.069 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.069 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.069 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.069 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.069 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.070 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.070 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.070 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.070 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.070 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.070 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.071 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.071 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.071 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.071 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.071 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.071 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.072 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.072 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.072 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.072 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.072 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.072 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.072 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.073 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.073 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.073 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.073 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.073 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.073 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.074 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.074 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.074 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.074 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.074 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.074 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.074 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.075 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.075 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.075 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.075 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.075 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.075 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.076 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.076 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.076 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.076 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.076 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.076 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.076 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.077 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.077 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.077 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.077 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.077 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.078 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.078 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.078 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.078 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.078 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.078 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.079 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.079 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.079 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.079 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.079 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.079 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.080 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.080 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.080 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.080 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.080 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.080 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.080 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.081 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.081 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.081 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.081 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.081 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.081 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.082 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.082 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.082 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.082 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.082 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.082 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.083 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.083 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.083 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.083 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.083 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.083 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.083 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.084 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.084 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.084 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.084 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.084 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.084 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.085 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.085 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.085 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.085 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.085 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.085 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.086 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.086 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.086 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.086 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.086 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.086 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.087 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.087 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.087 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.087 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.087 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.087 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.088 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.088 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.088 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.088 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.088 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.088 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.088 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.089 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.089 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.089 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.089 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.089 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.089 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.090 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.090 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.090 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.090 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.090 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.090 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.091 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.091 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.091 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.091 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.091 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.091 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.092 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.092 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.092 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.092 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.092 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.092 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.093 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.093 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.093 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.093 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.093 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.093 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.094 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.094 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.094 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.094 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.094 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.094 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.094 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.095 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.095 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.095 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.095 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.095 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.095 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.096 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.096 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.096 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.096 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.096 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.096 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.096 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.097 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.097 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.097 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.097 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.097 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.097 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.098 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.098 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.098 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.098 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.098 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.098 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.098 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.099 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.099 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.099 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.099 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.099 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.099 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.100 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.100 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.100 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.100 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.100 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.100 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.101 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.101 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.101 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.101 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.101 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.101 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.101 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.102 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.102 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.102 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.102 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.102 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.102 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.103 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.103 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.103 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.103 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.103 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.103 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.104 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.104 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.104 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.104 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.104 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.104 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.104 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.105 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.105 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.105 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.105 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.105 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.105 239185 DEBUG oslo_service.service [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.107 239185 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 23 18:53:45 compute-0 sudo[239995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlzdzqfirsicskfafexsrcoyuffdpikc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194424.8410308-1298-11729056112422/AnsiballZ_systemd.py'
Jan 23 18:53:45 compute-0 sudo[239995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.468 239185 DEBUG nova.virt.libvirt.host [None req-3b409377-3d36-4e56-9e0f-3e173d6e7077 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.469 239185 DEBUG nova.virt.libvirt.host [None req-3b409377-3d36-4e56-9e0f-3e173d6e7077 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.470 239185 DEBUG nova.virt.libvirt.host [None req-3b409377-3d36-4e56-9e0f-3e173d6e7077 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.470 239185 DEBUG nova.virt.libvirt.host [None req-3b409377-3d36-4e56-9e0f-3e173d6e7077 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 23 18:53:45 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 23 18:53:45 compute-0 python3.9[239997]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 18:53:45 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 23 18:53:45 compute-0 systemd[1]: Stopping nova_compute container...
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.548 239185 DEBUG nova.virt.libvirt.host [None req-3b409377-3d36-4e56-9e0f-3e173d6e7077 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f49c642de80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.554 239185 DEBUG nova.virt.libvirt.host [None req-3b409377-3d36-4e56-9e0f-3e173d6e7077 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f49c642de80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.555 239185 INFO nova.virt.libvirt.driver [None req-3b409377-3d36-4e56-9e0f-3e173d6e7077 - - - - - -] Connection event '1' reason 'None'
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.605 239185 DEBUG oslo_concurrency.lockutils [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.606 239185 DEBUG oslo_concurrency.lockutils [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 18:53:45 compute-0 nova_compute[239181]: 2026-01-23 18:53:45.606 239185 DEBUG oslo_concurrency.lockutils [None req-91e7a803-0ebf-4a15-a8a1-84429706a5e9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 18:53:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:46 compute-0 virtqemud[240019]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 23 18:53:46 compute-0 virtqemud[240019]: hostname: compute-0
Jan 23 18:53:46 compute-0 virtqemud[240019]: End of file while reading data: Input/output error
Jan 23 18:53:46 compute-0 systemd[1]: libpod-29082451aafc29729b37267d69211584136d80e0ea0fc75736a75cbd1af6bd97.scope: Deactivated successfully.
Jan 23 18:53:46 compute-0 systemd[1]: libpod-29082451aafc29729b37267d69211584136d80e0ea0fc75736a75cbd1af6bd97.scope: Consumed 3.335s CPU time.
Jan 23 18:53:46 compute-0 podman[240041]: 2026-01-23 18:53:46.82873604 +0000 UTC m=+1.268401269 container died 29082451aafc29729b37267d69211584136d80e0ea0fc75736a75cbd1af6bd97 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 23 18:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29082451aafc29729b37267d69211584136d80e0ea0fc75736a75cbd1af6bd97-userdata-shm.mount: Deactivated successfully.
Jan 23 18:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d-merged.mount: Deactivated successfully.
Jan 23 18:53:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:48 compute-0 ceph-mon[75097]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:48 compute-0 podman[240041]: 2026-01-23 18:53:48.685212239 +0000 UTC m=+3.124877438 container cleanup 29082451aafc29729b37267d69211584136d80e0ea0fc75736a75cbd1af6bd97 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 18:53:48 compute-0 podman[240041]: nova_compute
Jan 23 18:53:48 compute-0 podman[240090]: nova_compute
Jan 23 18:53:48 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 23 18:53:48 compute-0 systemd[1]: Stopped nova_compute container.
Jan 23 18:53:48 compute-0 systemd[1]: Starting nova_compute container...
Jan 23 18:53:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d45cd8ac665a170efa4df733e0750a1f6965d4eb6199af294cf3ee6abd39a5d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:48 compute-0 podman[240103]: 2026-01-23 18:53:48.865895934 +0000 UTC m=+0.094490814 container init 29082451aafc29729b37267d69211584136d80e0ea0fc75736a75cbd1af6bd97 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 23 18:53:48 compute-0 podman[240103]: 2026-01-23 18:53:48.874606696 +0000 UTC m=+0.103201576 container start 29082451aafc29729b37267d69211584136d80e0ea0fc75736a75cbd1af6bd97 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:53:48 compute-0 podman[240103]: nova_compute
Jan 23 18:53:48 compute-0 nova_compute[240119]: + sudo -E kolla_set_configs
Jan 23 18:53:48 compute-0 systemd[1]: Started nova_compute container.
Jan 23 18:53:48 compute-0 sudo[239995]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Validating config file
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying service configuration files
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Deleting /etc/ceph
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Creating directory /etc/ceph
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/ceph
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Writing out command to execute
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 18:53:48 compute-0 nova_compute[240119]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 18:53:48 compute-0 nova_compute[240119]: ++ cat /run_command
Jan 23 18:53:48 compute-0 nova_compute[240119]: + CMD=nova-compute
Jan 23 18:53:48 compute-0 nova_compute[240119]: + ARGS=
Jan 23 18:53:48 compute-0 nova_compute[240119]: + sudo kolla_copy_cacerts
Jan 23 18:53:48 compute-0 nova_compute[240119]: Running command: 'nova-compute'
Jan 23 18:53:48 compute-0 nova_compute[240119]: + [[ ! -n '' ]]
Jan 23 18:53:48 compute-0 nova_compute[240119]: + . kolla_extend_start
Jan 23 18:53:48 compute-0 nova_compute[240119]: + echo 'Running command: '\''nova-compute'\'''
Jan 23 18:53:48 compute-0 nova_compute[240119]: + umask 0022
Jan 23 18:53:48 compute-0 nova_compute[240119]: + exec nova-compute
Jan 23 18:53:49 compute-0 sudo[240280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-semujwwekgvcnxtecwpncnmoodvdluyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769194429.101812-1307-144813029253717/AnsiballZ_podman_container.py'
Jan 23 18:53:49 compute-0 sudo[240280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 18:53:49 compute-0 python3.9[240282]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 23 18:53:49 compute-0 ceph-mon[75097]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:49 compute-0 systemd[1]: Started libpod-conmon-5a2393339e0654b823d90c3261576f4fe0719499dc0c42e3fbe10c0c93f50373.scope.
Jan 23 18:53:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579707ed4e038db4d20dcc9ce94cde0610d3233caf38443672b815d7db7bee6c/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579707ed4e038db4d20dcc9ce94cde0610d3233caf38443672b815d7db7bee6c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579707ed4e038db4d20dcc9ce94cde0610d3233caf38443672b815d7db7bee6c/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 23 18:53:49 compute-0 podman[240308]: 2026-01-23 18:53:49.823414829 +0000 UTC m=+0.117470891 container init 5a2393339e0654b823d90c3261576f4fe0719499dc0c42e3fbe10c0c93f50373 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 18:53:49 compute-0 podman[240308]: 2026-01-23 18:53:49.829986629 +0000 UTC m=+0.124042661 container start 5a2393339e0654b823d90c3261576f4fe0719499dc0c42e3fbe10c0c93f50373 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3)
Jan 23 18:53:49 compute-0 python3.9[240282]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Applying nova statedir ownership
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 23 18:53:49 compute-0 nova_compute_init[240329]: INFO:nova_statedir:Nova statedir ownership complete
Jan 23 18:53:49 compute-0 systemd[1]: libpod-5a2393339e0654b823d90c3261576f4fe0719499dc0c42e3fbe10c0c93f50373.scope: Deactivated successfully.
Jan 23 18:53:49 compute-0 podman[240343]: 2026-01-23 18:53:49.924066042 +0000 UTC m=+0.025908770 container died 5a2393339e0654b823d90c3261576f4fe0719499dc0c42e3fbe10c0c93f50373 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 18:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a2393339e0654b823d90c3261576f4fe0719499dc0c42e3fbe10c0c93f50373-userdata-shm.mount: Deactivated successfully.
Jan 23 18:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-579707ed4e038db4d20dcc9ce94cde0610d3233caf38443672b815d7db7bee6c-merged.mount: Deactivated successfully.
Jan 23 18:53:49 compute-0 podman[240343]: 2026-01-23 18:53:49.965127128 +0000 UTC m=+0.066969826 container cleanup 5a2393339e0654b823d90c3261576f4fe0719499dc0c42e3fbe10c0c93f50373 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3)
Jan 23 18:53:49 compute-0 systemd[1]: libpod-conmon-5a2393339e0654b823d90c3261576f4fe0719499dc0c42e3fbe10c0c93f50373.scope: Deactivated successfully.
Jan 23 18:53:49 compute-0 sudo[240280]: pam_unix(sudo:session): session closed for user root
Jan 23 18:53:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:50 compute-0 sshd-session[215782]: Connection closed by 192.168.122.30 port 59642
Jan 23 18:53:50 compute-0 sshd-session[215779]: pam_unix(sshd:session): session closed for user zuul
Jan 23 18:53:50 compute-0 systemd-logind[792]: Session 49 logged out. Waiting for processes to exit.
Jan 23 18:53:50 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 23 18:53:50 compute-0 systemd[1]: session-49.scope: Consumed 2min 1.024s CPU time.
Jan 23 18:53:50 compute-0 systemd-logind[792]: Removed session 49.
Jan 23 18:53:50 compute-0 ceph-mon[75097]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.148 240123 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.148 240123 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.149 240123 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.149 240123 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.316 240123 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.338 240123 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.338 240123 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.789 240123 INFO nova.virt.driver [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.914 240123 INFO nova.compute.provider_config [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.968 240123 DEBUG oslo_concurrency.lockutils [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.969 240123 DEBUG oslo_concurrency.lockutils [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.969 240123 DEBUG oslo_concurrency.lockutils [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.969 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.969 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.970 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.970 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.970 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.970 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.970 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.970 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.971 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.971 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.971 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.971 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.971 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.971 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.971 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.972 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.972 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.972 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.972 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.972 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.973 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.973 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.973 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.973 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.973 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.973 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.973 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.974 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.974 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.974 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.974 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.974 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.974 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.975 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.975 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.975 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.975 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.975 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.975 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.976 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.976 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.976 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.976 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.976 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.976 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.976 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.977 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.977 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.977 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.977 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.977 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.977 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.978 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.978 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.978 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.978 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.978 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.978 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.978 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.979 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.979 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.979 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.979 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.979 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.979 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.979 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.980 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.980 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.980 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.980 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.980 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.980 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.981 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.981 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.981 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.981 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.981 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.981 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.981 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.982 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.982 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.982 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.982 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.982 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.982 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.982 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.983 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.983 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.983 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.983 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.983 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.983 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.983 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.984 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.984 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.984 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.984 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.984 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.984 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.984 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.985 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.985 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.985 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.985 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.985 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.985 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.986 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.986 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.986 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.986 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.986 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.986 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.986 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.987 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.987 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.987 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.987 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.987 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.987 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.987 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.988 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.988 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.988 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.988 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.988 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.988 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.988 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.989 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.989 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.989 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.989 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.989 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.989 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.989 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.990 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.990 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.990 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.990 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.990 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.990 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.990 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.991 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.991 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.991 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.991 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.991 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.991 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.991 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.992 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.992 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.992 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.992 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.992 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.992 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.992 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.993 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.993 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.993 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.993 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.993 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.993 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.993 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.994 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.994 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.994 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.994 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.994 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.994 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.994 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.995 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.995 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.995 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.995 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.995 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.995 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.995 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.996 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.996 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.996 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.996 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.996 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.996 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.997 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.997 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.997 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.997 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.997 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.997 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.997 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.997 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.998 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.998 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.998 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.998 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.998 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.999 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.999 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.999 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.999 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.999 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:51 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.999 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:51.999 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.000 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.000 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.000 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.000 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.000 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.000 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.000 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.001 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.001 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.001 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.001 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.001 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.001 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.001 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.002 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.002 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.002 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.002 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.002 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.002 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.002 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.003 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.003 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.003 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.003 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.003 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.003 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.004 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.004 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.004 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.004 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.004 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.004 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.005 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.005 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.005 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.005 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.005 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.005 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.006 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.006 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.006 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.006 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.006 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.006 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.007 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.007 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.007 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.007 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.007 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.007 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.008 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.008 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.008 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.008 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.008 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.009 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.009 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.009 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.009 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.009 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.009 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.010 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.010 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.010 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.010 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.010 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.010 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.011 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.011 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.011 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.011 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.011 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.011 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.011 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.011 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.012 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.012 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.012 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.012 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.012 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.012 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.012 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.013 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.013 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.013 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.013 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.013 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.013 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.014 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.014 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.014 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.014 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.014 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.014 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.015 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.015 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.015 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.015 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.015 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.015 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.015 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.016 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.016 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.016 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.016 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.016 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.016 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.017 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.017 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.017 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.017 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.018 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.018 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.018 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.018 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.018 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.018 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.019 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.019 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.019 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.019 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.019 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.019 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.019 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.020 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.020 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.020 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.020 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.020 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.020 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.021 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.021 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.021 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.021 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.021 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.022 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.022 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.022 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.022 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.022 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.023 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.023 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.023 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.023 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.023 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.023 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.023 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.023 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.024 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.024 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.024 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.024 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.025 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.025 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.025 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.025 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.025 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.025 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.026 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.026 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.026 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.026 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.026 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.026 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.026 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.027 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.027 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.027 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.027 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.027 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.027 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.027 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.028 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.028 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.028 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.028 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.028 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.028 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.029 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.029 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.029 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.029 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.029 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.029 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.029 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.030 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.030 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.030 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.030 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.030 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.030 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.031 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.031 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.031 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.031 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.031 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.031 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.032 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.032 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.032 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.032 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.032 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.033 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.033 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.033 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.033 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.033 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.033 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.033 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.034 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.034 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.034 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.034 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.034 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.035 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.035 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.035 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.035 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.035 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.036 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.036 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.036 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.036 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.036 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.036 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.037 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.037 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.037 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.037 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.037 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.037 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.037 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.038 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.038 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.038 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.038 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.038 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.038 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.039 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.039 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.039 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.039 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.040 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.040 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.040 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.040 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.040 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.040 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.041 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.041 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.041 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.041 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.042 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.042 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.042 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.042 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.042 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.043 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.043 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.043 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.043 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.043 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.044 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.044 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.044 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.044 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.044 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.045 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.045 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.045 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.045 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.046 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.046 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.046 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.046 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.046 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.047 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.047 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.047 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.047 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.047 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.048 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.048 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.048 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.048 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.048 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.049 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.049 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.049 240123 WARNING oslo_config.cfg [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 23 18:53:52 compute-0 nova_compute[240119]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 23 18:53:52 compute-0 nova_compute[240119]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 23 18:53:52 compute-0 nova_compute[240119]: and ``live_migration_inbound_addr`` respectively.
Jan 23 18:53:52 compute-0 nova_compute[240119]: ).  Its value may be silently ignored in the future.
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.049 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.050 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.050 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.050 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.050 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.051 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.051 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.051 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.051 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.051 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.052 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.052 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.052 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.052 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.052 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.053 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.053 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.053 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.053 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.rbd_secret_uuid        = 4da2adac-d096-5ead-9ec5-3108259ba9e4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.053 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.054 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.054 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.054 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.054 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.054 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.055 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.055 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.055 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.055 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.056 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.056 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.056 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.056 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.056 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.057 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.057 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.057 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.057 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.057 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.058 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.058 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.058 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.058 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.058 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.059 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.059 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.059 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.059 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.059 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.060 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.060 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.060 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.060 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.061 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.061 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.061 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.061 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.061 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.062 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.062 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.062 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.062 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.062 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.063 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.063 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.063 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.063 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.063 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.063 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.064 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.064 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.064 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.064 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.064 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.065 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.065 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.065 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.065 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.065 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.066 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.066 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.066 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.066 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.066 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.067 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.067 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.067 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.067 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.067 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.068 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.068 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.068 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.068 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.068 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.069 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.069 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.069 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.069 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.069 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.070 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.070 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.070 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.070 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.070 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.071 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.071 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.071 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.071 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.071 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.072 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.072 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.072 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.072 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.072 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.073 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.073 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.073 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.073 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.073 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.073 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.074 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.074 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.074 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.074 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.074 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.075 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.075 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.075 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.075 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.076 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.076 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.076 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.076 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.076 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.076 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.077 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.077 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.077 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.077 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.078 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.078 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.078 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.078 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.079 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.079 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.079 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.079 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.079 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.080 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.080 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.080 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.080 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.080 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.081 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.081 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.081 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.081 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.081 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.082 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.082 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.082 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.082 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.082 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.083 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.083 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.083 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.083 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.083 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.084 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.084 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.084 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.084 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.084 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.084 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.085 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.085 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.085 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.085 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.085 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.086 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.086 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.086 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.086 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.087 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.087 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.087 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.087 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.087 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.088 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.088 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.088 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.088 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.088 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.089 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.089 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.089 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.089 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.089 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.090 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.090 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.090 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.090 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.091 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.091 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.091 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.091 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.091 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.092 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.092 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.092 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.092 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.092 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.093 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.093 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.093 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.093 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.093 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.094 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.094 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.094 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.094 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.094 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.095 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.095 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.095 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.095 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.095 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.096 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.096 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.096 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.096 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.096 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.096 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.097 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.097 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.097 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.097 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.097 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.098 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.098 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.098 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.098 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.098 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.099 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.099 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.099 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.099 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.099 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.100 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.100 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.100 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.100 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.101 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.101 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.101 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.101 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.102 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.102 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.102 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.102 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.102 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.103 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.103 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.103 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.103 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.103 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.104 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.104 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.104 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.104 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.104 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.104 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.105 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.105 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.105 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.105 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.105 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.106 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.106 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.106 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.106 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.106 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.107 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.107 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.107 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.107 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.107 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.108 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.108 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.108 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.108 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.109 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.109 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.109 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.109 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.109 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.110 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.110 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.110 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.110 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.110 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.111 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.111 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.111 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.111 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.111 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.112 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.112 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.112 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.112 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.112 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.113 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.113 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.113 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.113 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.113 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.114 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.114 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.114 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.114 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.114 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.115 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.115 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.115 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.115 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.115 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.116 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.116 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.116 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.116 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.116 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.117 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.117 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.117 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.117 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.117 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.118 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.118 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.118 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.118 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.119 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.119 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.119 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.119 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.119 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.120 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.120 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.120 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.120 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.120 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.121 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.121 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.121 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.121 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.121 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.121 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.122 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.122 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.122 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.122 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.122 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.123 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.123 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.123 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.123 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.123 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.124 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.124 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.124 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.124 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.124 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.125 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.125 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.125 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.125 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.125 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.126 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.126 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.126 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.126 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.126 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.127 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.127 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.127 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.127 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.127 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.128 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.128 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.128 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.128 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.128 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.129 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.129 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.129 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.129 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.129 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.129 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.130 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.130 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.130 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.130 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.131 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.131 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.131 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.131 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.131 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.132 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.132 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.132 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.132 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.132 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.132 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.133 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.133 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.133 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.133 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.133 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.134 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.134 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.134 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.134 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.134 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.135 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.135 240123 DEBUG oslo_service.service [None req-b601cb95-6f7f-4533-aff5-98a918a5a1b7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.136 240123 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 23 18:53:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.327 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.328 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.328 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.328 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.356 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f3c6d7ebf40> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.359 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f3c6d7ebf40> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.360 240123 INFO nova.virt.libvirt.driver [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Connection event '1' reason 'None'
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.365 240123 INFO nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Libvirt host capabilities <capabilities>
Jan 23 18:53:52 compute-0 nova_compute[240119]: 
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <host>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <uuid>3ba9981f-5b2a-4718-88b1-acd426df24a5</uuid>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <cpu>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <arch>x86_64</arch>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model>EPYC-Rome-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <vendor>AMD</vendor>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <microcode version='16777317'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <signature family='23' model='49' stepping='0'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='x2apic'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='tsc-deadline'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='osxsave'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='hypervisor'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='tsc_adjust'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='spec-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='stibp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='arch-capabilities'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='cmp_legacy'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='topoext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='virt-ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='lbrv'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='tsc-scale'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='vmcb-clean'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='pause-filter'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='pfthreshold'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='svme-addr-chk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='rdctl-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='skip-l1dfl-vmentry'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='mds-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature name='pschange-mc-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <pages unit='KiB' size='4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <pages unit='KiB' size='2048'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <pages unit='KiB' size='1048576'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </cpu>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <power_management>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <suspend_mem/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </power_management>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <iommu support='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <migration_features>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <live/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <uri_transports>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <uri_transport>tcp</uri_transport>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <uri_transport>rdma</uri_transport>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </uri_transports>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </migration_features>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <topology>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <cells num='1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <cell id='0'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:           <memory unit='KiB'>7864308</memory>
Jan 23 18:53:52 compute-0 nova_compute[240119]:           <pages unit='KiB' size='4'>1966077</pages>
Jan 23 18:53:52 compute-0 nova_compute[240119]:           <pages unit='KiB' size='2048'>0</pages>
Jan 23 18:53:52 compute-0 nova_compute[240119]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 23 18:53:52 compute-0 nova_compute[240119]:           <distances>
Jan 23 18:53:52 compute-0 nova_compute[240119]:             <sibling id='0' value='10'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:           </distances>
Jan 23 18:53:52 compute-0 nova_compute[240119]:           <cpus num='8'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:           </cpus>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         </cell>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </cells>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </topology>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <cache>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </cache>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <secmodel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model>selinux</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <doi>0</doi>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </secmodel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <secmodel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model>dac</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <doi>0</doi>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </secmodel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </host>
Jan 23 18:53:52 compute-0 nova_compute[240119]: 
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <guest>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <os_type>hvm</os_type>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <arch name='i686'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <wordsize>32</wordsize>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <domain type='qemu'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <domain type='kvm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </arch>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <features>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <pae/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <nonpae/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <acpi default='on' toggle='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <apic default='on' toggle='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <cpuselection/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <deviceboot/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <disksnapshot default='on' toggle='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <externalSnapshot/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </features>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </guest>
Jan 23 18:53:52 compute-0 nova_compute[240119]: 
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <guest>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <os_type>hvm</os_type>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <arch name='x86_64'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <wordsize>64</wordsize>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <domain type='qemu'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <domain type='kvm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </arch>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <features>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <acpi default='on' toggle='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <apic default='on' toggle='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <cpuselection/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <deviceboot/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <disksnapshot default='on' toggle='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <externalSnapshot/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </features>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </guest>
Jan 23 18:53:52 compute-0 nova_compute[240119]: 
Jan 23 18:53:52 compute-0 nova_compute[240119]: </capabilities>
Jan 23 18:53:52 compute-0 nova_compute[240119]: 
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.372 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.393 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 23 18:53:52 compute-0 nova_compute[240119]: <domainCapabilities>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <domain>kvm</domain>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <arch>i686</arch>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <vcpu max='4096'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <iothreads supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <os supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <enum name='firmware'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <loader supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>rom</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pflash</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='readonly'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>yes</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>no</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='secure'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>no</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </loader>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </os>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <cpu>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='host-passthrough' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='hostPassthroughMigratable'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>on</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>off</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='maximum' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='maximumMigratable'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>on</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>off</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='host-model' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <vendor>AMD</vendor>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='x2apic'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='hypervisor'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='stibp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='overflow-recov'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='succor'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='lbrv'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc-scale'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='flushbyasid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='pause-filter'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='pfthreshold'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='disable' name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='custom' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='ClearwaterForest'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ddpd-u'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sha512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='ClearwaterForest-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ddpd-u'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sha512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Dhyana-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Turin'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vp2intersect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibpb-brtype'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbpb'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='srso-user-kernel-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Turin-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vp2intersect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibpb-brtype'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbpb'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='srso-user-kernel-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-128'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-256'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-128'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-256'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v6'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v7'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='KnightsMill'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4fmaps'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4vnniw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512er'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512pf'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='KnightsMill-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4fmaps'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4vnniw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512er'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512pf'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G4-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tbm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G5-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tbm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 podman[240419]: 2026-01-23 18:53:52.427776657 +0000 UTC m=+0.057430475 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='athlon'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='athlon-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='core2duo'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='core2duo-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='coreduo'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='coreduo-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='n270'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='n270-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='phenom'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='phenom-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </cpu>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <memoryBacking supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <enum name='sourceType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>file</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>anonymous</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>memfd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </memoryBacking>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <devices>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <disk supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='diskDevice'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>disk</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>cdrom</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>floppy</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>lun</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='bus'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>fdc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>scsi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>sata</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-non-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </disk>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <graphics supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vnc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>egl-headless</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dbus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </graphics>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <video supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='modelType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vga</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>cirrus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>none</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>bochs</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>ramfb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </video>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <hostdev supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='mode'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>subsystem</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='startupPolicy'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>default</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>mandatory</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>requisite</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>optional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='subsysType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pci</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>scsi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='capsType'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='pciBackend'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </hostdev>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <rng supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-non-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>random</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>egd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>builtin</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </rng>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <filesystem supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='driverType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>path</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>handle</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtiofs</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </filesystem>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <tpm supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tpm-tis</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tpm-crb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>emulator</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>external</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendVersion'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>2.0</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </tpm>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <redirdev supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='bus'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </redirdev>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <channel supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pty</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>unix</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </channel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <crypto supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>qemu</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>builtin</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </crypto>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <interface supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>default</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>passt</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </interface>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <panic supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>isa</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>hyperv</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </panic>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <console supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>null</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pty</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dev</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>file</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pipe</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>stdio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>udp</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tcp</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>unix</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>qemu-vdagent</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dbus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </console>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </devices>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <features>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <gic supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <vmcoreinfo supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <genid supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <backingStoreInput supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <backup supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <async-teardown supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <s390-pv supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <ps2 supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <tdx supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <sev supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <sgx supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <hyperv supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='features'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>relaxed</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vapic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>spinlocks</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vpindex</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>runtime</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>synic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>stimer</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>reset</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vendor_id</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>frequencies</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>reenlightenment</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tlbflush</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>ipi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>avic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>emsr_bitmap</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>xmm_input</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <defaults>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <spinlocks>4095</spinlocks>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <stimer_direct>on</stimer_direct>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </defaults>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </hyperv>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <launchSecurity supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </features>
Jan 23 18:53:52 compute-0 nova_compute[240119]: </domainCapabilities>
Jan 23 18:53:52 compute-0 nova_compute[240119]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.400 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 23 18:53:52 compute-0 nova_compute[240119]: <domainCapabilities>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <domain>kvm</domain>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <arch>i686</arch>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <vcpu max='240'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <iothreads supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <os supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <enum name='firmware'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <loader supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>rom</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pflash</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='readonly'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>yes</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>no</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='secure'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>no</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </loader>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </os>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <cpu>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='host-passthrough' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='hostPassthroughMigratable'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>on</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>off</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='maximum' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='maximumMigratable'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>on</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>off</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='host-model' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <vendor>AMD</vendor>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='x2apic'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='hypervisor'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='stibp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='overflow-recov'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='succor'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='lbrv'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc-scale'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='flushbyasid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='pause-filter'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='pfthreshold'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='disable' name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='custom' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='ClearwaterForest'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ddpd-u'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sha512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='ClearwaterForest-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ddpd-u'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sha512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Dhyana-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Turin'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vp2intersect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibpb-brtype'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbpb'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='srso-user-kernel-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Turin-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vp2intersect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibpb-brtype'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbpb'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='srso-user-kernel-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-128'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-256'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-128'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-256'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v6'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v7'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='KnightsMill'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4fmaps'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4vnniw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512er'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512pf'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='KnightsMill-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4fmaps'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4vnniw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512er'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512pf'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G4-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tbm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G5-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tbm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='athlon'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='athlon-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='core2duo'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='core2duo-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='coreduo'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='coreduo-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='n270'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='n270-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='phenom'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='phenom-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </cpu>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <memoryBacking supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <enum name='sourceType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>file</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>anonymous</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>memfd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </memoryBacking>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <devices>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <disk supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='diskDevice'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>disk</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>cdrom</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>floppy</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>lun</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='bus'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>ide</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>fdc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>scsi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>sata</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-non-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </disk>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <graphics supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vnc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>egl-headless</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dbus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </graphics>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <video supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='modelType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vga</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>cirrus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>none</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>bochs</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>ramfb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </video>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <hostdev supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='mode'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>subsystem</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='startupPolicy'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>default</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>mandatory</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>requisite</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>optional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='subsysType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pci</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>scsi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='capsType'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='pciBackend'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </hostdev>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <rng supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-non-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>random</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>egd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>builtin</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </rng>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <filesystem supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='driverType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>path</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>handle</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtiofs</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </filesystem>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <tpm supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tpm-tis</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tpm-crb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>emulator</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>external</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendVersion'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>2.0</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </tpm>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <redirdev supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='bus'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </redirdev>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <channel supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pty</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>unix</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </channel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <crypto supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>qemu</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>builtin</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </crypto>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <interface supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>default</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>passt</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </interface>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <panic supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>isa</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>hyperv</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </panic>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <console supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>null</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pty</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dev</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>file</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pipe</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>stdio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>udp</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tcp</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>unix</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>qemu-vdagent</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dbus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </console>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </devices>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <features>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <gic supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <vmcoreinfo supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <genid supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <backingStoreInput supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <backup supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <async-teardown supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <s390-pv supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <ps2 supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <tdx supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <sev supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <sgx supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <hyperv supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='features'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>relaxed</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vapic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>spinlocks</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vpindex</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>runtime</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>synic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>stimer</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>reset</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vendor_id</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>frequencies</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>reenlightenment</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tlbflush</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>ipi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>avic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>emsr_bitmap</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>xmm_input</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <defaults>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <spinlocks>4095</spinlocks>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <stimer_direct>on</stimer_direct>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </defaults>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </hyperv>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <launchSecurity supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </features>
Jan 23 18:53:52 compute-0 nova_compute[240119]: </domainCapabilities>
Jan 23 18:53:52 compute-0 nova_compute[240119]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.458 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.460 240123 WARNING nova.virt.libvirt.driver [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.460 240123 DEBUG nova.virt.libvirt.volume.mount [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.464 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 23 18:53:52 compute-0 nova_compute[240119]: <domainCapabilities>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <domain>kvm</domain>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <arch>x86_64</arch>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <vcpu max='240'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <iothreads supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <os supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <enum name='firmware'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <loader supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>rom</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pflash</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='readonly'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>yes</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>no</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='secure'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>no</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </loader>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </os>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <cpu>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='host-passthrough' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='hostPassthroughMigratable'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>on</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>off</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='maximum' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='maximumMigratable'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>on</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>off</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='host-model' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <vendor>AMD</vendor>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='x2apic'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='hypervisor'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='stibp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='overflow-recov'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='succor'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='lbrv'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc-scale'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='flushbyasid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='pause-filter'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='pfthreshold'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='disable' name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='custom' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='ClearwaterForest'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ddpd-u'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sha512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='ClearwaterForest-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ddpd-u'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sha512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Dhyana-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Turin'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vp2intersect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibpb-brtype'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbpb'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='srso-user-kernel-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Turin-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vp2intersect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibpb-brtype'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbpb'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='srso-user-kernel-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-128'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-256'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-128'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-256'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v6'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v7'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='KnightsMill'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4fmaps'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4vnniw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512er'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512pf'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='KnightsMill-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4fmaps'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4vnniw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512er'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512pf'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G4-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tbm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G5-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tbm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='athlon'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='athlon-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='core2duo'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='core2duo-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='coreduo'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='coreduo-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='n270'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='n270-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='phenom'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='phenom-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </cpu>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <memoryBacking supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <enum name='sourceType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>file</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>anonymous</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>memfd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </memoryBacking>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <devices>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <disk supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='diskDevice'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>disk</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>cdrom</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>floppy</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>lun</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='bus'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>ide</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>fdc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>scsi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>sata</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-non-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </disk>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <graphics supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vnc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>egl-headless</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dbus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </graphics>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <video supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='modelType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vga</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>cirrus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>none</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>bochs</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>ramfb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </video>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <hostdev supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='mode'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>subsystem</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='startupPolicy'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>default</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>mandatory</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>requisite</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>optional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='subsysType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pci</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>scsi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='capsType'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='pciBackend'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </hostdev>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <rng supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-non-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>random</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>egd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>builtin</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </rng>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <filesystem supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='driverType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>path</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>handle</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtiofs</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </filesystem>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <tpm supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tpm-tis</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tpm-crb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>emulator</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>external</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendVersion'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>2.0</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </tpm>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <redirdev supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='bus'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </redirdev>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <channel supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pty</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>unix</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </channel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <crypto supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>qemu</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>builtin</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </crypto>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <interface supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>default</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>passt</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </interface>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <panic supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>isa</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>hyperv</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </panic>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <console supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>null</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pty</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dev</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>file</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pipe</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>stdio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>udp</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tcp</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>unix</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>qemu-vdagent</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dbus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </console>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </devices>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <features>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <gic supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <vmcoreinfo supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <genid supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <backingStoreInput supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <backup supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <async-teardown supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <s390-pv supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <ps2 supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <tdx supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <sev supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <sgx supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <hyperv supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='features'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>relaxed</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vapic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>spinlocks</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vpindex</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>runtime</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>synic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>stimer</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>reset</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vendor_id</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>frequencies</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>reenlightenment</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tlbflush</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>ipi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>avic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>emsr_bitmap</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>xmm_input</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <defaults>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <spinlocks>4095</spinlocks>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <stimer_direct>on</stimer_direct>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </defaults>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </hyperv>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <launchSecurity supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </features>
Jan 23 18:53:52 compute-0 nova_compute[240119]: </domainCapabilities>
Jan 23 18:53:52 compute-0 nova_compute[240119]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.550 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 23 18:53:52 compute-0 nova_compute[240119]: <domainCapabilities>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <domain>kvm</domain>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <arch>x86_64</arch>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <vcpu max='4096'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <iothreads supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <os supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <enum name='firmware'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>efi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <loader supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>rom</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pflash</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='readonly'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>yes</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>no</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='secure'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>yes</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>no</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </loader>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </os>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <cpu>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='host-passthrough' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='hostPassthroughMigratable'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>on</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>off</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='maximum' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='maximumMigratable'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>on</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>off</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='host-model' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <vendor>AMD</vendor>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='x2apic'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='hypervisor'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='stibp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='overflow-recov'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='succor'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='lbrv'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='tsc-scale'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='flushbyasid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='pause-filter'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='pfthreshold'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <feature policy='disable' name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <mode name='custom' supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Broadwell-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='ClearwaterForest'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ddpd-u'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sha512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='ClearwaterForest-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ddpd-u'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sha512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm3'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sm4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Cooperlake-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Denverton-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Dhyana-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Milan-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Rome-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Turin'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vp2intersect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibpb-brtype'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbpb'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='srso-user-kernel-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-Turin-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amd-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='auto-ibrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vp2intersect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fs-gs-base-ns'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibpb-brtype'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='no-nested-data-bp'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='null-sel-clr-base'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='perfmon-v2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbpb'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='srso-user-kernel-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='stibp-always-on'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='EPYC-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-128'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-256'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='GraniteRapids-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-128'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-256'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx10-512'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='prefetchiti'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Haswell-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v6'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Icelake-Server-v7'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='IvyBridge-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='KnightsMill'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4fmaps'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4vnniw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512er'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512pf'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='KnightsMill-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4fmaps'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-4vnniw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512er'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512pf'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G4-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tbm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Opteron_G5-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fma4'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tbm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xop'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SapphireRapids-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='amx-tile'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-bf16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-fp16'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512-vpopcntdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bitalg'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vbmi2'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrc'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fzrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='la57'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='taa-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='tsx-ldtrk'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='SierraForest-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ifma'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-ne-convert'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx-vnni-int8'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bhi-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='bus-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cmpccxadd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fbsdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='fsrs'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ibrs-all'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='intel-psfd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ipred-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='lam'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mcdt-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pbrsb-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='psdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rrsba-ctrl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='sbdr-ssdp-no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='serialize'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vaes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='vpclmulqdq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Client-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='hle'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='rtm'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Skylake-Server-v5'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512bw'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512cd'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512dq'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512f'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='avx512vl'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='invpcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pcid'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='pku'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='mpx'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v2'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v3'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='core-capability'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='split-lock-detect'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='Snowridge-v4'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='cldemote'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='erms'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='gfni'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdir64b'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='movdiri'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='xsaves'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='athlon'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='athlon-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='core2duo'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='core2duo-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='coreduo'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='coreduo-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='n270'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='n270-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='ss'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='phenom'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <blockers model='phenom-v1'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnow'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <feature name='3dnowext'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </blockers>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </mode>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </cpu>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <memoryBacking supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <enum name='sourceType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>file</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>anonymous</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <value>memfd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </memoryBacking>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <devices>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <disk supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='diskDevice'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>disk</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>cdrom</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>floppy</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>lun</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='bus'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>fdc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>scsi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>sata</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-non-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </disk>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <graphics supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vnc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>egl-headless</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dbus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </graphics>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <video supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='modelType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vga</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>cirrus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>none</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>bochs</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>ramfb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </video>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <hostdev supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='mode'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>subsystem</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='startupPolicy'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>default</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>mandatory</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>requisite</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>optional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='subsysType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pci</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>scsi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='capsType'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='pciBackend'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </hostdev>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <rng supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtio-non-transitional</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>random</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>egd</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>builtin</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </rng>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <filesystem supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='driverType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>path</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>handle</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>virtiofs</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </filesystem>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <tpm supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tpm-tis</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tpm-crb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>emulator</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>external</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendVersion'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>2.0</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </tpm>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <redirdev supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='bus'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>usb</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </redirdev>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <channel supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pty</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>unix</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </channel>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <crypto supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>qemu</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendModel'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>builtin</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </crypto>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <interface supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='backendType'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>default</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>passt</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </interface>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <panic supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='model'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>isa</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>hyperv</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </panic>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <console supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='type'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>null</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vc</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pty</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dev</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>file</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>pipe</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>stdio</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>udp</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tcp</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>unix</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>qemu-vdagent</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>dbus</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </console>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </devices>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   <features>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <gic supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <vmcoreinfo supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <genid supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <backingStoreInput supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <backup supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <async-teardown supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <s390-pv supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <ps2 supported='yes'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <tdx supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <sev supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <sgx supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <hyperv supported='yes'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <enum name='features'>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>relaxed</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vapic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>spinlocks</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vpindex</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>runtime</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>synic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>stimer</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>reset</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>vendor_id</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>frequencies</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>reenlightenment</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>tlbflush</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>ipi</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>avic</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>emsr_bitmap</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <value>xmm_input</value>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </enum>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       <defaults>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <spinlocks>4095</spinlocks>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <stimer_direct>on</stimer_direct>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 18:53:52 compute-0 nova_compute[240119]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 18:53:52 compute-0 nova_compute[240119]:       </defaults>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     </hyperv>
Jan 23 18:53:52 compute-0 nova_compute[240119]:     <launchSecurity supported='no'/>
Jan 23 18:53:52 compute-0 nova_compute[240119]:   </features>
Jan 23 18:53:52 compute-0 nova_compute[240119]: </domainCapabilities>
Jan 23 18:53:52 compute-0 nova_compute[240119]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.638 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.638 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.638 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.645 240123 INFO nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Secure Boot support detected
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.647 240123 INFO nova.virt.libvirt.driver [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.647 240123 INFO nova.virt.libvirt.driver [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.656 240123 DEBUG nova.virt.libvirt.driver [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.835 240123 INFO nova.virt.node [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Determined node identity 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 from /var/lib/nova/compute_id
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.870 240123 WARNING nova.compute.manager [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Compute nodes ['64fe6ca7-28b4-4adf-a124-1c61d6858eb5'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.942 240123 INFO nova.compute.manager [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.985 240123 WARNING nova.compute.manager [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.985 240123 DEBUG oslo_concurrency.lockutils [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.986 240123 DEBUG oslo_concurrency.lockutils [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.986 240123 DEBUG oslo_concurrency.lockutils [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.986 240123 DEBUG nova.compute.resource_tracker [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 18:53:52 compute-0 nova_compute[240119]: 2026-01-23 18:53:52.986 240123 DEBUG oslo_concurrency.processutils [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:53:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:53 compute-0 ceph-mon[75097]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:53:53 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687982147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:53:53 compute-0 nova_compute[240119]: 2026-01-23 18:53:53.570 240123 DEBUG oslo_concurrency.processutils [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:53:53 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 23 18:53:53 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 23 18:53:53 compute-0 nova_compute[240119]: 2026-01-23 18:53:53.877 240123 WARNING nova.virt.libvirt.driver [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 18:53:53 compute-0 nova_compute[240119]: 2026-01-23 18:53:53.879 240123 DEBUG nova.compute.resource_tracker [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5071MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 18:53:53 compute-0 nova_compute[240119]: 2026-01-23 18:53:53.879 240123 DEBUG oslo_concurrency.lockutils [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:53:53 compute-0 nova_compute[240119]: 2026-01-23 18:53:53.879 240123 DEBUG oslo_concurrency.lockutils [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:53:53 compute-0 nova_compute[240119]: 2026-01-23 18:53:53.960 240123 WARNING nova.compute.resource_tracker [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] No compute node record for compute-0.ctlplane.example.com:64fe6ca7-28b4-4adf-a124-1c61d6858eb5: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 could not be found.
Jan 23 18:53:54 compute-0 nova_compute[240119]: 2026-01-23 18:53:54.159 240123 INFO nova.compute.resource_tracker [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5
Jan 23 18:53:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:54 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/687982147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:53:54 compute-0 nova_compute[240119]: 2026-01-23 18:53:54.383 240123 DEBUG nova.compute.resource_tracker [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 18:53:54 compute-0 nova_compute[240119]: 2026-01-23 18:53:54.383 240123 DEBUG nova.compute.resource_tracker [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 18:53:55 compute-0 ceph-mon[75097]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:55 compute-0 nova_compute[240119]: 2026-01-23 18:53:55.459 240123 INFO nova.scheduler.client.report [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] [req-8fc4f877-4f51-44d3-aa36-5236a38ea6f2] Created resource provider record via placement API for resource provider with UUID 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 and name compute-0.ctlplane.example.com.
Jan 23 18:53:55 compute-0 nova_compute[240119]: 2026-01-23 18:53:55.870 240123 DEBUG oslo_concurrency.processutils [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:53:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:53:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1706698341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.448 240123 DEBUG oslo_concurrency.processutils [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.452 240123 DEBUG nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 23 18:53:56 compute-0 nova_compute[240119]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.453 240123 INFO nova.virt.libvirt.host [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] kernel doesn't support AMD SEV
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.454 240123 DEBUG nova.compute.provider_tree [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Updating inventory in ProviderTree for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.454 240123 DEBUG nova.virt.libvirt.driver [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.506 240123 DEBUG nova.scheduler.client.report [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Updated inventory for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.506 240123 DEBUG nova.compute.provider_tree [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Updating resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.506 240123 DEBUG nova.compute.provider_tree [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Updating inventory in ProviderTree for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.618 240123 DEBUG nova.compute.provider_tree [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Updating resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.644 240123 DEBUG nova.compute.resource_tracker [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.645 240123 DEBUG oslo_concurrency.lockutils [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.645 240123 DEBUG nova.service [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.773 240123 DEBUG nova.service [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 23 18:53:56 compute-0 nova_compute[240119]: 2026-01-23 18:53:56.774 240123 DEBUG nova.servicegroup.drivers.db [None req-10bff35a-684b-4de5-a86f-49329c9739ab - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 23 18:53:57 compute-0 ceph-mon[75097]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1706698341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:53:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:53:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:53:59 compute-0 ceph-mon[75097]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:54:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:54:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:54:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:54:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:54:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:54:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:00 compute-0 ceph-mon[75097]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:02 compute-0 ceph-mon[75097]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:05 compute-0 ceph-mon[75097]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:06 compute-0 ceph-mon[75097]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:54:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3245 writes, 14K keys, 3245 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3245 writes, 3245 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1250 writes, 5465 keys, 1250 commit groups, 1.0 writes per commit group, ingest: 8.40 MB, 0.01 MB/s
                                           Interval WAL: 1250 writes, 1250 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     55.1      0.27              0.05         6    0.045       0      0       0.0       0.0
                                             L6      1/0    7.19 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     67.3     55.4      0.64              0.12         5    0.128     19K   2203       0.0       0.0
                                            Sum      1/0    7.19 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     47.4     55.3      0.91              0.18        11    0.083     19K   2203       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     87.0     88.5      0.31              0.10         6    0.051     12K   1457       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     67.3     55.4      0.64              0.12         5    0.128     19K   2203       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     55.8      0.26              0.05         5    0.053       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 0.9 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.04 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5639441df8d0#2 capacity: 308.00 MB usage: 1.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(90,1.43 MB,0.463471%) FilterBlock(12,63.17 KB,0.0200296%) IndexBlock(12,126.39 KB,0.0400741%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 18:54:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:09 compute-0 ceph-mon[75097]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:11 compute-0 ceph-mon[75097]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:11 compute-0 nova_compute[240119]: 2026-01-23 18:54:11.776 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:11 compute-0 nova_compute[240119]: 2026-01-23 18:54:11.908 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:54:13.574 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:54:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:54:13.575 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:54:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:54:13.575 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:54:13 compute-0 ceph-mon[75097]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:14 compute-0 podman[240507]: 2026-01-23 18:54:14.079690379 +0000 UTC m=+0.153053524 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 18:54:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:14 compute-0 ceph-mon[75097]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:16 compute-0 sudo[240533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:54:16 compute-0 sudo[240533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:54:16 compute-0 sudo[240533]: pam_unix(sudo:session): session closed for user root
Jan 23 18:54:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:16 compute-0 sudo[240558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:54:16 compute-0 sudo[240558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:54:16 compute-0 sudo[240558]: pam_unix(sudo:session): session closed for user root
Jan 23 18:54:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:54:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:54:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:54:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:54:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:54:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:54:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:54:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:54:17 compute-0 ceph-mon[75097]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:17 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:54:17 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:54:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:54:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:54:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:54:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:54:17 compute-0 sudo[240615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:54:17 compute-0 sudo[240615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:54:17 compute-0 sudo[240615]: pam_unix(sudo:session): session closed for user root
Jan 23 18:54:17 compute-0 sudo[240640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:54:17 compute-0 sudo[240640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:54:17 compute-0 podman[240676]: 2026-01-23 18:54:17.517762678 +0000 UTC m=+0.045530246 container create 5a9e40755c40c98bbea3d145b0f0e90b1d71d0c2c2eb9f06a2b688b176c7ebe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 18:54:17 compute-0 systemd[1]: Started libpod-conmon-5a9e40755c40c98bbea3d145b0f0e90b1d71d0c2c2eb9f06a2b688b176c7ebe5.scope.
Jan 23 18:54:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:54:17 compute-0 podman[240676]: 2026-01-23 18:54:17.497732113 +0000 UTC m=+0.025499701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:54:17 compute-0 podman[240676]: 2026-01-23 18:54:17.594941841 +0000 UTC m=+0.122709429 container init 5a9e40755c40c98bbea3d145b0f0e90b1d71d0c2c2eb9f06a2b688b176c7ebe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:54:17 compute-0 podman[240676]: 2026-01-23 18:54:17.603888268 +0000 UTC m=+0.131655836 container start 5a9e40755c40c98bbea3d145b0f0e90b1d71d0c2c2eb9f06a2b688b176c7ebe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:54:17 compute-0 podman[240676]: 2026-01-23 18:54:17.607439084 +0000 UTC m=+0.135206672 container attach 5a9e40755c40c98bbea3d145b0f0e90b1d71d0c2c2eb9f06a2b688b176c7ebe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 18:54:17 compute-0 friendly_vaughan[240693]: 167 167
Jan 23 18:54:17 compute-0 systemd[1]: libpod-5a9e40755c40c98bbea3d145b0f0e90b1d71d0c2c2eb9f06a2b688b176c7ebe5.scope: Deactivated successfully.
Jan 23 18:54:17 compute-0 podman[240676]: 2026-01-23 18:54:17.610474718 +0000 UTC m=+0.138242316 container died 5a9e40755c40c98bbea3d145b0f0e90b1d71d0c2c2eb9f06a2b688b176c7ebe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e26a2293132b8591fe6d66b9fc08d71fd45c448a6ab69584b028b98998915dd9-merged.mount: Deactivated successfully.
Jan 23 18:54:17 compute-0 podman[240676]: 2026-01-23 18:54:17.659711383 +0000 UTC m=+0.187478951 container remove 5a9e40755c40c98bbea3d145b0f0e90b1d71d0c2c2eb9f06a2b688b176c7ebe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 23 18:54:17 compute-0 systemd[1]: libpod-conmon-5a9e40755c40c98bbea3d145b0f0e90b1d71d0c2c2eb9f06a2b688b176c7ebe5.scope: Deactivated successfully.
Jan 23 18:54:17 compute-0 podman[240717]: 2026-01-23 18:54:17.823137429 +0000 UTC m=+0.044902781 container create 4b0601bec71d8ac5cffef2dec4b27d0a3f0c96085050acf99804c973e87f764e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shockley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:54:17 compute-0 systemd[1]: Started libpod-conmon-4b0601bec71d8ac5cffef2dec4b27d0a3f0c96085050acf99804c973e87f764e.scope.
Jan 23 18:54:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90dbb7137262c32604d8039f6559fd3885c1d9ba3f5074451bcddb7d1fd3e125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:17 compute-0 podman[240717]: 2026-01-23 18:54:17.80138589 +0000 UTC m=+0.023151262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90dbb7137262c32604d8039f6559fd3885c1d9ba3f5074451bcddb7d1fd3e125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90dbb7137262c32604d8039f6559fd3885c1d9ba3f5074451bcddb7d1fd3e125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90dbb7137262c32604d8039f6559fd3885c1d9ba3f5074451bcddb7d1fd3e125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90dbb7137262c32604d8039f6559fd3885c1d9ba3f5074451bcddb7d1fd3e125/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:17 compute-0 podman[240717]: 2026-01-23 18:54:17.931175021 +0000 UTC m=+0.152940403 container init 4b0601bec71d8ac5cffef2dec4b27d0a3f0c96085050acf99804c973e87f764e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 18:54:17 compute-0 podman[240717]: 2026-01-23 18:54:17.940308152 +0000 UTC m=+0.162073504 container start 4b0601bec71d8ac5cffef2dec4b27d0a3f0c96085050acf99804c973e87f764e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shockley, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 23 18:54:17 compute-0 podman[240717]: 2026-01-23 18:54:17.944950644 +0000 UTC m=+0.166716006 container attach 4b0601bec71d8ac5cffef2dec4b27d0a3f0c96085050acf99804c973e87f764e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:54:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:54:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:54:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:54:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:54:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:18 compute-0 optimistic_shockley[240733]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:54:18 compute-0 optimistic_shockley[240733]: --> All data devices are unavailable
Jan 23 18:54:18 compute-0 systemd[1]: libpod-4b0601bec71d8ac5cffef2dec4b27d0a3f0c96085050acf99804c973e87f764e.scope: Deactivated successfully.
Jan 23 18:54:18 compute-0 podman[240753]: 2026-01-23 18:54:18.533179228 +0000 UTC m=+0.032030838 container died 4b0601bec71d8ac5cffef2dec4b27d0a3f0c96085050acf99804c973e87f764e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shockley, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 23 18:54:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 18:54:20 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3572901932' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:54:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 18:54:20 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3572901932' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:54:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 18:54:20 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2694833788' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:54:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 18:54:20 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2694833788' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:54:21 compute-0 ceph-mon[75097]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-90dbb7137262c32604d8039f6559fd3885c1d9ba3f5074451bcddb7d1fd3e125-merged.mount: Deactivated successfully.
Jan 23 18:54:21 compute-0 podman[240753]: 2026-01-23 18:54:21.160341279 +0000 UTC m=+2.659192859 container remove 4b0601bec71d8ac5cffef2dec4b27d0a3f0c96085050acf99804c973e87f764e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:54:21 compute-0 systemd[1]: libpod-conmon-4b0601bec71d8ac5cffef2dec4b27d0a3f0c96085050acf99804c973e87f764e.scope: Deactivated successfully.
Jan 23 18:54:21 compute-0 sudo[240640]: pam_unix(sudo:session): session closed for user root
Jan 23 18:54:21 compute-0 sudo[240768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:54:21 compute-0 sudo[240768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:54:21 compute-0 sudo[240768]: pam_unix(sudo:session): session closed for user root
Jan 23 18:54:21 compute-0 sudo[240793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:54:21 compute-0 sudo[240793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:54:21 compute-0 podman[240830]: 2026-01-23 18:54:21.704107105 +0000 UTC m=+0.040764420 container create a459340cc778fe128317c777095217e98131068a86bed298443401fbb30ceb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_raman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 18:54:21 compute-0 systemd[1]: Started libpod-conmon-a459340cc778fe128317c777095217e98131068a86bed298443401fbb30ceb3e.scope.
Jan 23 18:54:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:54:21 compute-0 podman[240830]: 2026-01-23 18:54:21.688881755 +0000 UTC m=+0.025539090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:54:21 compute-0 podman[240830]: 2026-01-23 18:54:21.788743828 +0000 UTC m=+0.125401163 container init a459340cc778fe128317c777095217e98131068a86bed298443401fbb30ceb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_raman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:54:21 compute-0 podman[240830]: 2026-01-23 18:54:21.798635988 +0000 UTC m=+0.135293303 container start a459340cc778fe128317c777095217e98131068a86bed298443401fbb30ceb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_raman, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:54:21 compute-0 happy_raman[240846]: 167 167
Jan 23 18:54:21 compute-0 podman[240830]: 2026-01-23 18:54:21.804478901 +0000 UTC m=+0.141136236 container attach a459340cc778fe128317c777095217e98131068a86bed298443401fbb30ceb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:54:21 compute-0 systemd[1]: libpod-a459340cc778fe128317c777095217e98131068a86bed298443401fbb30ceb3e.scope: Deactivated successfully.
Jan 23 18:54:21 compute-0 podman[240830]: 2026-01-23 18:54:21.805080065 +0000 UTC m=+0.141737390 container died a459340cc778fe128317c777095217e98131068a86bed298443401fbb30ceb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_raman, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 23 18:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1aa75956a83673f775cb45e670539b945415f89b7f58933bd8de68409f75eaa-merged.mount: Deactivated successfully.
Jan 23 18:54:21 compute-0 podman[240830]: 2026-01-23 18:54:21.84319742 +0000 UTC m=+0.179854735 container remove a459340cc778fe128317c777095217e98131068a86bed298443401fbb30ceb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:54:21 compute-0 systemd[1]: libpod-conmon-a459340cc778fe128317c777095217e98131068a86bed298443401fbb30ceb3e.scope: Deactivated successfully.
Jan 23 18:54:22 compute-0 podman[240869]: 2026-01-23 18:54:21.994254645 +0000 UTC m=+0.025603592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:54:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 18:54:22 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2558899406' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:54:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 18:54:22 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2558899406' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:54:23 compute-0 ceph-mon[75097]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:23 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3572901932' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:54:23 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3572901932' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:54:23 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2694833788' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:54:23 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2694833788' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:54:23 compute-0 podman[240869]: 2026-01-23 18:54:23.045883394 +0000 UTC m=+1.077232291 container create 94605aed8b89d2e91509ced2be80128f8df5be7a37608535bd92e868b499c1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:54:23 compute-0 systemd[1]: Started libpod-conmon-94605aed8b89d2e91509ced2be80128f8df5be7a37608535bd92e868b499c1ce.scope.
Jan 23 18:54:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe92d3f2c345a3c3dc5b4dd493fa187c7d148545278d91211ec8609f0dfd036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe92d3f2c345a3c3dc5b4dd493fa187c7d148545278d91211ec8609f0dfd036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe92d3f2c345a3c3dc5b4dd493fa187c7d148545278d91211ec8609f0dfd036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe92d3f2c345a3c3dc5b4dd493fa187c7d148545278d91211ec8609f0dfd036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:23 compute-0 podman[240869]: 2026-01-23 18:54:23.8975497 +0000 UTC m=+1.928898617 container init 94605aed8b89d2e91509ced2be80128f8df5be7a37608535bd92e868b499c1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 23 18:54:23 compute-0 podman[240869]: 2026-01-23 18:54:23.907289417 +0000 UTC m=+1.938638314 container start 94605aed8b89d2e91509ced2be80128f8df5be7a37608535bd92e868b499c1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 18:54:23 compute-0 podman[240869]: 2026-01-23 18:54:23.914960633 +0000 UTC m=+1.946309550 container attach 94605aed8b89d2e91509ced2be80128f8df5be7a37608535bd92e868b499c1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 18:54:23 compute-0 podman[240883]: 2026-01-23 18:54:23.966300209 +0000 UTC m=+1.040937601 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 18:54:24 compute-0 ceph-mon[75097]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:24 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2558899406' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:54:24 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2558899406' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:54:24 compute-0 tender_williams[240897]: {
Jan 23 18:54:24 compute-0 tender_williams[240897]:     "0": [
Jan 23 18:54:24 compute-0 tender_williams[240897]:         {
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "devices": [
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "/dev/loop3"
Jan 23 18:54:24 compute-0 tender_williams[240897]:             ],
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_name": "ceph_lv0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_size": "21470642176",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "name": "ceph_lv0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "tags": {
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.cluster_name": "ceph",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.crush_device_class": "",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.encrypted": "0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.objectstore": "bluestore",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.osd_id": "0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.type": "block",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.vdo": "0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.with_tpm": "0"
Jan 23 18:54:24 compute-0 tender_williams[240897]:             },
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "type": "block",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "vg_name": "ceph_vg0"
Jan 23 18:54:24 compute-0 tender_williams[240897]:         }
Jan 23 18:54:24 compute-0 tender_williams[240897]:     ],
Jan 23 18:54:24 compute-0 tender_williams[240897]:     "1": [
Jan 23 18:54:24 compute-0 tender_williams[240897]:         {
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "devices": [
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "/dev/loop4"
Jan 23 18:54:24 compute-0 tender_williams[240897]:             ],
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_name": "ceph_lv1",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_size": "21470642176",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "name": "ceph_lv1",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "tags": {
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.cluster_name": "ceph",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.crush_device_class": "",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.encrypted": "0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.objectstore": "bluestore",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.osd_id": "1",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.type": "block",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.vdo": "0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.with_tpm": "0"
Jan 23 18:54:24 compute-0 tender_williams[240897]:             },
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "type": "block",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "vg_name": "ceph_vg1"
Jan 23 18:54:24 compute-0 tender_williams[240897]:         }
Jan 23 18:54:24 compute-0 tender_williams[240897]:     ],
Jan 23 18:54:24 compute-0 tender_williams[240897]:     "2": [
Jan 23 18:54:24 compute-0 tender_williams[240897]:         {
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "devices": [
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "/dev/loop5"
Jan 23 18:54:24 compute-0 tender_williams[240897]:             ],
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_name": "ceph_lv2",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_size": "21470642176",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "name": "ceph_lv2",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "tags": {
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.cluster_name": "ceph",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.crush_device_class": "",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.encrypted": "0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.objectstore": "bluestore",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.osd_id": "2",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.type": "block",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.vdo": "0",
Jan 23 18:54:24 compute-0 tender_williams[240897]:                 "ceph.with_tpm": "0"
Jan 23 18:54:24 compute-0 tender_williams[240897]:             },
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "type": "block",
Jan 23 18:54:24 compute-0 tender_williams[240897]:             "vg_name": "ceph_vg2"
Jan 23 18:54:24 compute-0 tender_williams[240897]:         }
Jan 23 18:54:24 compute-0 tender_williams[240897]:     ]
Jan 23 18:54:24 compute-0 tender_williams[240897]: }
Jan 23 18:54:24 compute-0 systemd[1]: libpod-94605aed8b89d2e91509ced2be80128f8df5be7a37608535bd92e868b499c1ce.scope: Deactivated successfully.
Jan 23 18:54:24 compute-0 podman[240869]: 2026-01-23 18:54:24.227007045 +0000 UTC m=+2.258355942 container died 94605aed8b89d2e91509ced2be80128f8df5be7a37608535bd92e868b499c1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:54:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-efe92d3f2c345a3c3dc5b4dd493fa187c7d148545278d91211ec8609f0dfd036-merged.mount: Deactivated successfully.
Jan 23 18:54:24 compute-0 podman[240869]: 2026-01-23 18:54:24.405988829 +0000 UTC m=+2.437337726 container remove 94605aed8b89d2e91509ced2be80128f8df5be7a37608535bd92e868b499c1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 18:54:24 compute-0 systemd[1]: libpod-conmon-94605aed8b89d2e91509ced2be80128f8df5be7a37608535bd92e868b499c1ce.scope: Deactivated successfully.
Jan 23 18:54:24 compute-0 sudo[240793]: pam_unix(sudo:session): session closed for user root
Jan 23 18:54:24 compute-0 sudo[240927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:54:24 compute-0 sudo[240927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:54:24 compute-0 sudo[240927]: pam_unix(sudo:session): session closed for user root
Jan 23 18:54:24 compute-0 sudo[240952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:54:24 compute-0 sudo[240952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:54:24 compute-0 podman[240989]: 2026-01-23 18:54:24.837352467 +0000 UTC m=+0.022487037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:54:25 compute-0 podman[240989]: 2026-01-23 18:54:25.127807494 +0000 UTC m=+0.312942024 container create f22f5ee26512312ddaa8d36cc414372311888394b001662bcd11cb5398b362d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 18:54:25 compute-0 ceph-mon[75097]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:25 compute-0 systemd[1]: Started libpod-conmon-f22f5ee26512312ddaa8d36cc414372311888394b001662bcd11cb5398b362d4.scope.
Jan 23 18:54:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:54:25 compute-0 podman[240989]: 2026-01-23 18:54:25.885241535 +0000 UTC m=+1.070376085 container init f22f5ee26512312ddaa8d36cc414372311888394b001662bcd11cb5398b362d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 18:54:25 compute-0 podman[240989]: 2026-01-23 18:54:25.893046554 +0000 UTC m=+1.078181084 container start f22f5ee26512312ddaa8d36cc414372311888394b001662bcd11cb5398b362d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_johnson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:54:25 compute-0 bold_johnson[241005]: 167 167
Jan 23 18:54:25 compute-0 systemd[1]: libpod-f22f5ee26512312ddaa8d36cc414372311888394b001662bcd11cb5398b362d4.scope: Deactivated successfully.
Jan 23 18:54:26 compute-0 podman[240989]: 2026-01-23 18:54:26.190987473 +0000 UTC m=+1.376122093 container attach f22f5ee26512312ddaa8d36cc414372311888394b001662bcd11cb5398b362d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_johnson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 18:54:26 compute-0 podman[240989]: 2026-01-23 18:54:26.192212673 +0000 UTC m=+1.377347233 container died f22f5ee26512312ddaa8d36cc414372311888394b001662bcd11cb5398b362d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_johnson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 18:54:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b53b6002999131ee98dff1aa526fe52aa0ae3f3caaff49784a0f942a69267a07-merged.mount: Deactivated successfully.
Jan 23 18:54:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:26 compute-0 podman[240989]: 2026-01-23 18:54:26.617113725 +0000 UTC m=+1.802248255 container remove f22f5ee26512312ddaa8d36cc414372311888394b001662bcd11cb5398b362d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_johnson, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:54:26 compute-0 systemd[1]: libpod-conmon-f22f5ee26512312ddaa8d36cc414372311888394b001662bcd11cb5398b362d4.scope: Deactivated successfully.
Jan 23 18:54:26 compute-0 podman[241031]: 2026-01-23 18:54:26.779190798 +0000 UTC m=+0.022486228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:54:26 compute-0 podman[241031]: 2026-01-23 18:54:26.959857362 +0000 UTC m=+0.203152772 container create e4898b1759616dfc0abd3c38855f77dbbe196ab37b9c964e7d0c7a85c743e095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:54:26 compute-0 ceph-mon[75097]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:27 compute-0 systemd[1]: Started libpod-conmon-e4898b1759616dfc0abd3c38855f77dbbe196ab37b9c964e7d0c7a85c743e095.scope.
Jan 23 18:54:27 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3aa0c15a2ec2bae48bbd710dae39d75db9fc98893239bece719ce28e36058f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3aa0c15a2ec2bae48bbd710dae39d75db9fc98893239bece719ce28e36058f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3aa0c15a2ec2bae48bbd710dae39d75db9fc98893239bece719ce28e36058f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3aa0c15a2ec2bae48bbd710dae39d75db9fc98893239bece719ce28e36058f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:54:27 compute-0 podman[241031]: 2026-01-23 18:54:27.170091023 +0000 UTC m=+0.413386453 container init e4898b1759616dfc0abd3c38855f77dbbe196ab37b9c964e7d0c7a85c743e095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ritchie, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:54:27 compute-0 podman[241031]: 2026-01-23 18:54:27.178815045 +0000 UTC m=+0.422110445 container start e4898b1759616dfc0abd3c38855f77dbbe196ab37b9c964e7d0c7a85c743e095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 23 18:54:27 compute-0 podman[241031]: 2026-01-23 18:54:27.182921084 +0000 UTC m=+0.426216514 container attach e4898b1759616dfc0abd3c38855f77dbbe196ab37b9c964e7d0c7a85c743e095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 23 18:54:27 compute-0 lvm[241125]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:54:27 compute-0 lvm[241126]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:54:27 compute-0 lvm[241125]: VG ceph_vg0 finished
Jan 23 18:54:27 compute-0 lvm[241126]: VG ceph_vg1 finished
Jan 23 18:54:27 compute-0 lvm[241128]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:54:27 compute-0 lvm[241128]: VG ceph_vg2 finished
Jan 23 18:54:28 compute-0 eloquent_ritchie[241047]: {}
Jan 23 18:54:28 compute-0 podman[241031]: 2026-01-23 18:54:28.103151175 +0000 UTC m=+1.346446585 container died e4898b1759616dfc0abd3c38855f77dbbe196ab37b9c964e7d0c7a85c743e095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 18:54:28 compute-0 systemd[1]: libpod-e4898b1759616dfc0abd3c38855f77dbbe196ab37b9c964e7d0c7a85c743e095.scope: Deactivated successfully.
Jan 23 18:54:28 compute-0 systemd[1]: libpod-e4898b1759616dfc0abd3c38855f77dbbe196ab37b9c964e7d0c7a85c743e095.scope: Consumed 1.368s CPU time.
Jan 23 18:54:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c3aa0c15a2ec2bae48bbd710dae39d75db9fc98893239bece719ce28e36058f-merged.mount: Deactivated successfully.
Jan 23 18:54:28 compute-0 podman[241031]: 2026-01-23 18:54:28.638706741 +0000 UTC m=+1.882002151 container remove e4898b1759616dfc0abd3c38855f77dbbe196ab37b9c964e7d0c7a85c743e095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:54:28 compute-0 ceph-mon[75097]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:28 compute-0 sudo[240952]: pam_unix(sudo:session): session closed for user root
Jan 23 18:54:28 compute-0 systemd[1]: libpod-conmon-e4898b1759616dfc0abd3c38855f77dbbe196ab37b9c964e7d0c7a85c743e095.scope: Deactivated successfully.
Jan 23 18:54:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:54:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:54:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:54:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:54:28
Jan 23 18:54:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:54:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:54:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'volumes']
Jan 23 18:54:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:54:29 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:54:29 compute-0 sudo[241146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:54:29 compute-0 sudo[241146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:54:29 compute-0 sudo[241146]: pam_unix(sudo:session): session closed for user root
Jan 23 18:54:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:54:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:30 compute-0 ceph-mon[75097]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:54:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:54:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:32 compute-0 ceph-mon[75097]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:34.202426) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194474202526, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1634, "num_deletes": 507, "total_data_size": 2212712, "memory_usage": 2261648, "flush_reason": "Manual Compaction"}
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 23 18:54:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194474488790, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2169918, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13421, "largest_seqno": 15054, "table_properties": {"data_size": 2162814, "index_size": 3662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 17078, "raw_average_key_size": 18, "raw_value_size": 2146631, "raw_average_value_size": 2298, "num_data_blocks": 168, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769194328, "oldest_key_time": 1769194328, "file_creation_time": 1769194474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 286403 microseconds, and 7324 cpu microseconds.
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:34.488847) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2169918 bytes OK
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:34.488871) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:34.609547) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:34.609667) EVENT_LOG_v1 {"time_micros": 1769194474609654, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:34.609708) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2204559, prev total WAL file size 2204559, number of live WAL files 2.
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:34.611255) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2119KB)], [32(7363KB)]
Jan 23 18:54:34 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194474611349, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9709770, "oldest_snapshot_seqno": -1}
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3905 keys, 7725766 bytes, temperature: kUnknown
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194475210305, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7725766, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7697517, "index_size": 17387, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 95555, "raw_average_key_size": 24, "raw_value_size": 7624699, "raw_average_value_size": 1952, "num_data_blocks": 737, "num_entries": 3905, "num_filter_entries": 3905, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769194474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:35.210650) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7725766 bytes
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:35.559128) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 16.2 rd, 12.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 4932, records dropped: 1027 output_compression: NoCompression
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:35.559177) EVENT_LOG_v1 {"time_micros": 1769194475559153, "job": 14, "event": "compaction_finished", "compaction_time_micros": 599046, "compaction_time_cpu_micros": 35624, "output_level": 6, "num_output_files": 1, "total_output_size": 7725766, "num_input_records": 4932, "num_output_records": 3905, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194475560428, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194475562052, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:34.611032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:35.562171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:35.562175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:35.562177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:35.562179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:54:35 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:54:35.562180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:54:35 compute-0 ceph-mon[75097]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:36 compute-0 ceph-mon[75097]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:39 compute-0 ceph-mon[75097]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:54:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:54:41 compute-0 ceph-mon[75097]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:42 compute-0 ceph-mon[75097]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread fragmentation_score=0.000117 took=0.000012s
Jan 23 18:54:43 compute-0 ceph-osd[88072]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000048s
Jan 23 18:54:44 compute-0 ceph-osd[87009]: bluestore.MempoolThread fragmentation_score=0.000127 took=0.000037s
Jan 23 18:54:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:45 compute-0 podman[241171]: 2026-01-23 18:54:45.040765069 +0000 UTC m=+0.118605258 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 23 18:54:45 compute-0 ceph-mon[75097]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:46 compute-0 ceph-mon[75097]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:49 compute-0 ceph-mon[75097]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.351 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.351 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.352 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.352 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.370 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.370 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.370 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.371 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.371 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.371 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.371 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.372 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.372 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.406 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.407 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.407 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.407 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 18:54:51 compute-0 nova_compute[240119]: 2026-01-23 18:54:51.408 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:54:51 compute-0 ceph-mon[75097]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:54:51 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978971971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.018 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.175 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.176 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5107MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.176 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.177 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.248 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.248 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.265 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:54:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:52 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2978971971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:54:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:54:52 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1485371795' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.833 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.840 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.858 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.882 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 18:54:52 compute-0 nova_compute[240119]: 2026-01-23 18:54:52.883 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:54:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:54 compute-0 ceph-mon[75097]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:54 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1485371795' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:54:54 compute-0 podman[241242]: 2026-01-23 18:54:54.970973153 +0000 UTC m=+0.048187075 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 18:54:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:56 compute-0 ceph-mon[75097]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:58 compute-0 ceph-mon[75097]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:54:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:54:59 compute-0 ceph-mon[75097]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:55:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:55:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:55:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:55:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:55:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:55:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:00 compute-0 ceph-mon[75097]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:02 compute-0 ceph-mon[75097]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:04 compute-0 ceph-mon[75097]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:07 compute-0 ceph-mon[75097]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:09 compute-0 ceph-mon[75097]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:11 compute-0 ceph-mon[75097]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:13 compute-0 ceph-mon[75097]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:55:13.576 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:55:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:55:13.577 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:55:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:55:13.577 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:55:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:14 compute-0 ceph-mon[75097]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:16 compute-0 podman[241262]: 2026-01-23 18:55:16.002476211 +0000 UTC m=+0.080655796 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:55:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:17 compute-0 ceph-mon[75097]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:19 compute-0 ceph-mon[75097]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:55:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5791 writes, 24K keys, 5791 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5791 writes, 971 syncs, 5.96 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 18:55:21 compute-0 ceph-mon[75097]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:22 compute-0 ceph-mon[75097]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 23 18:55:25 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1813429476' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 23 18:55:25 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 18:55:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 18:55:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 18:55:25 compute-0 ceph-mon[75097]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:25 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1813429476' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 23 18:55:25 compute-0 podman[241289]: 2026-01-23 18:55:25.989368986 +0000 UTC m=+0.075086569 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 18:55:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:55:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7030 writes, 29K keys, 7030 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7030 writes, 1364 syncs, 5.15 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 18:55:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:26 compute-0 ceph-mon[75097]: from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 18:55:27 compute-0 ceph-mon[75097]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:28 compute-0 ceph-mon[75097]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:55:28
Jan 23 18:55:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:55:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:55:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['vms', 'volumes', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.meta']
Jan 23 18:55:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:55:29 compute-0 sudo[241310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:55:29 compute-0 sudo[241310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:29 compute-0 sudo[241310]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:29 compute-0 sudo[241335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 23 18:55:29 compute-0 sudo[241335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:29 compute-0 sudo[241335]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:55:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:55:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:55:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:55:30 compute-0 sudo[241380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:55:30 compute-0 sudo[241380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:30 compute-0 sudo[241380]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:30 compute-0 sudo[241405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:55:30 compute-0 sudo[241405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:30 compute-0 sudo[241405]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:55:30 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:55:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:55:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:55:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:55:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:55:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:55:30 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:55:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:55:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:55:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:55:30 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:55:30 compute-0 sudo[241460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:55:30 compute-0 sudo[241460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:30 compute-0 sudo[241460]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:55:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:55:30 compute-0 sudo[241485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:55:30 compute-0 sudo[241485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 18:55:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 5664 writes, 24K keys, 5664 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5664 writes, 915 syncs, 6.19 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 18:55:31 compute-0 podman[241522]: 2026-01-23 18:55:31.259719822 +0000 UTC m=+0.019871785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:55:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:55:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:55:32 compute-0 ceph-mon[75097]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:55:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:55:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:55:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:55:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:55:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:55:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:32 compute-0 podman[241522]: 2026-01-23 18:55:32.651899602 +0000 UTC m=+1.412051545 container create 3a058329c233b52a65e230ccd67d9a76b9b957f8f9d14819f29cff9b4563497f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_visvesvaraya, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:55:33 compute-0 systemd[1]: Started libpod-conmon-3a058329c233b52a65e230ccd67d9a76b9b957f8f9d14819f29cff9b4563497f.scope.
Jan 23 18:55:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:55:33 compute-0 podman[241522]: 2026-01-23 18:55:33.523013954 +0000 UTC m=+2.283165917 container init 3a058329c233b52a65e230ccd67d9a76b9b957f8f9d14819f29cff9b4563497f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 18:55:33 compute-0 podman[241522]: 2026-01-23 18:55:33.530342622 +0000 UTC m=+2.290494565 container start 3a058329c233b52a65e230ccd67d9a76b9b957f8f9d14819f29cff9b4563497f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 18:55:33 compute-0 romantic_visvesvaraya[241538]: 167 167
Jan 23 18:55:33 compute-0 systemd[1]: libpod-3a058329c233b52a65e230ccd67d9a76b9b957f8f9d14819f29cff9b4563497f.scope: Deactivated successfully.
Jan 23 18:55:33 compute-0 ceph-mon[75097]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:33 compute-0 podman[241522]: 2026-01-23 18:55:33.79678987 +0000 UTC m=+2.556941833 container attach 3a058329c233b52a65e230ccd67d9a76b9b957f8f9d14819f29cff9b4563497f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_visvesvaraya, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 18:55:33 compute-0 podman[241522]: 2026-01-23 18:55:33.797204791 +0000 UTC m=+2.557356734 container died 3a058329c233b52a65e230ccd67d9a76b9b957f8f9d14819f29cff9b4563497f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d0976f5c075275b3ce62e70d2e58638f9725d872446796143a3aab253260007-merged.mount: Deactivated successfully.
Jan 23 18:55:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:34 compute-0 ceph-mon[75097]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:34 compute-0 podman[241522]: 2026-01-23 18:55:34.724979042 +0000 UTC m=+3.485130985 container remove 3a058329c233b52a65e230ccd67d9a76b9b957f8f9d14819f29cff9b4563497f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_visvesvaraya, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:55:34 compute-0 ceph-mgr[75390]: [devicehealth INFO root] Check health
Jan 23 18:55:34 compute-0 systemd[1]: libpod-conmon-3a058329c233b52a65e230ccd67d9a76b9b957f8f9d14819f29cff9b4563497f.scope: Deactivated successfully.
Jan 23 18:55:34 compute-0 podman[241561]: 2026-01-23 18:55:34.86204321 +0000 UTC m=+0.021964496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:55:35 compute-0 podman[241561]: 2026-01-23 18:55:35.086935686 +0000 UTC m=+0.246856952 container create 7b9bb56cc4561bf1f83e4ff844d4ca6e1db14e86821ace8ee0ca5df0fdc739c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 18:55:36 compute-0 systemd[1]: Started libpod-conmon-7b9bb56cc4561bf1f83e4ff844d4ca6e1db14e86821ace8ee0ca5df0fdc739c3.scope.
Jan 23 18:55:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c6be83ec8adc44a1147d25494a6d657118d66d3d1ce03312e905d340842683/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c6be83ec8adc44a1147d25494a6d657118d66d3d1ce03312e905d340842683/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c6be83ec8adc44a1147d25494a6d657118d66d3d1ce03312e905d340842683/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c6be83ec8adc44a1147d25494a6d657118d66d3d1ce03312e905d340842683/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c6be83ec8adc44a1147d25494a6d657118d66d3d1ce03312e905d340842683/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:36 compute-0 podman[241561]: 2026-01-23 18:55:36.330734434 +0000 UTC m=+1.490655750 container init 7b9bb56cc4561bf1f83e4ff844d4ca6e1db14e86821ace8ee0ca5df0fdc739c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 23 18:55:36 compute-0 podman[241561]: 2026-01-23 18:55:36.339667782 +0000 UTC m=+1.499589058 container start 7b9bb56cc4561bf1f83e4ff844d4ca6e1db14e86821ace8ee0ca5df0fdc739c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:55:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:36 compute-0 podman[241561]: 2026-01-23 18:55:36.70837917 +0000 UTC m=+1.868300466 container attach 7b9bb56cc4561bf1f83e4ff844d4ca6e1db14e86821ace8ee0ca5df0fdc739c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_vaughan, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:55:36 compute-0 kind_vaughan[241577]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:55:36 compute-0 kind_vaughan[241577]: --> All data devices are unavailable
Jan 23 18:55:36 compute-0 systemd[1]: libpod-7b9bb56cc4561bf1f83e4ff844d4ca6e1db14e86821ace8ee0ca5df0fdc739c3.scope: Deactivated successfully.
Jan 23 18:55:36 compute-0 podman[241561]: 2026-01-23 18:55:36.793175785 +0000 UTC m=+1.953097071 container died 7b9bb56cc4561bf1f83e4ff844d4ca6e1db14e86821ace8ee0ca5df0fdc739c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 18:55:37 compute-0 ceph-mon[75097]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-44c6be83ec8adc44a1147d25494a6d657118d66d3d1ce03312e905d340842683-merged.mount: Deactivated successfully.
Jan 23 18:55:38 compute-0 podman[241561]: 2026-01-23 18:55:38.318080647 +0000 UTC m=+3.478001913 container remove 7b9bb56cc4561bf1f83e4ff844d4ca6e1db14e86821ace8ee0ca5df0fdc739c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:55:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:38 compute-0 sudo[241485]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:38 compute-0 systemd[1]: libpod-conmon-7b9bb56cc4561bf1f83e4ff844d4ca6e1db14e86821ace8ee0ca5df0fdc739c3.scope: Deactivated successfully.
Jan 23 18:55:38 compute-0 sudo[241608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:55:38 compute-0 sudo[241608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:38 compute-0 sudo[241608]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:38 compute-0 sudo[241633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:55:38 compute-0 sudo[241633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:38 compute-0 podman[241670]: 2026-01-23 18:55:38.807502564 +0000 UTC m=+0.089547161 container create 9d407ddc9890f95dfe67080965807f54c7a1a185d5107b5e6d72373bd0b65dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_payne, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 23 18:55:38 compute-0 podman[241670]: 2026-01-23 18:55:38.744347347 +0000 UTC m=+0.026392034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:55:39 compute-0 systemd[1]: Started libpod-conmon-9d407ddc9890f95dfe67080965807f54c7a1a185d5107b5e6d72373bd0b65dd6.scope.
Jan 23 18:55:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:55:39 compute-0 podman[241670]: 2026-01-23 18:55:39.194406957 +0000 UTC m=+0.476451574 container init 9d407ddc9890f95dfe67080965807f54c7a1a185d5107b5e6d72373bd0b65dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 18:55:39 compute-0 podman[241670]: 2026-01-23 18:55:39.202510363 +0000 UTC m=+0.484554960 container start 9d407ddc9890f95dfe67080965807f54c7a1a185d5107b5e6d72373bd0b65dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_payne, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 23 18:55:39 compute-0 affectionate_payne[241687]: 167 167
Jan 23 18:55:39 compute-0 systemd[1]: libpod-9d407ddc9890f95dfe67080965807f54c7a1a185d5107b5e6d72373bd0b65dd6.scope: Deactivated successfully.
Jan 23 18:55:39 compute-0 podman[241670]: 2026-01-23 18:55:39.292551016 +0000 UTC m=+0.574595633 container attach 9d407ddc9890f95dfe67080965807f54c7a1a185d5107b5e6d72373bd0b65dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_payne, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 18:55:39 compute-0 podman[241670]: 2026-01-23 18:55:39.292925955 +0000 UTC m=+0.574970552 container died 9d407ddc9890f95dfe67080965807f54c7a1a185d5107b5e6d72373bd0b65dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_payne, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 18:55:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:39 compute-0 ceph-mon[75097]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-03559a6a79384e199b8b8d45a1a57886b225592940648ea4118f97fb6978b74e-merged.mount: Deactivated successfully.
Jan 23 18:55:40 compute-0 podman[241670]: 2026-01-23 18:55:40.007103506 +0000 UTC m=+1.289148103 container remove 9d407ddc9890f95dfe67080965807f54c7a1a185d5107b5e6d72373bd0b65dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_payne, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:55:40 compute-0 systemd[1]: libpod-conmon-9d407ddc9890f95dfe67080965807f54c7a1a185d5107b5e6d72373bd0b65dd6.scope: Deactivated successfully.
Jan 23 18:55:40 compute-0 podman[241710]: 2026-01-23 18:55:40.158256027 +0000 UTC m=+0.025757708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:55:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:55:40 compute-0 podman[241710]: 2026-01-23 18:55:40.885435903 +0000 UTC m=+0.752937554 container create 1f5efc940fc8a5bd067d485a339c4699f489681a1a90fd5eb78b960a13eda130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 18:55:41 compute-0 systemd[1]: Started libpod-conmon-1f5efc940fc8a5bd067d485a339c4699f489681a1a90fd5eb78b960a13eda130.scope.
Jan 23 18:55:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:55:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da4ffa90fb28c9bb1106c7f69f922c892025a801224bc61ee20670481a6fdaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da4ffa90fb28c9bb1106c7f69f922c892025a801224bc61ee20670481a6fdaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da4ffa90fb28c9bb1106c7f69f922c892025a801224bc61ee20670481a6fdaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da4ffa90fb28c9bb1106c7f69f922c892025a801224bc61ee20670481a6fdaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:41 compute-0 ceph-mon[75097]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:41 compute-0 podman[241710]: 2026-01-23 18:55:41.497717613 +0000 UTC m=+1.365219294 container init 1f5efc940fc8a5bd067d485a339c4699f489681a1a90fd5eb78b960a13eda130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:55:41 compute-0 podman[241710]: 2026-01-23 18:55:41.50829603 +0000 UTC m=+1.375797681 container start 1f5efc940fc8a5bd067d485a339c4699f489681a1a90fd5eb78b960a13eda130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:55:41 compute-0 podman[241710]: 2026-01-23 18:55:41.627605915 +0000 UTC m=+1.495107566 container attach 1f5efc940fc8a5bd067d485a339c4699f489681a1a90fd5eb78b960a13eda130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:55:41 compute-0 nifty_euler[241726]: {
Jan 23 18:55:41 compute-0 nifty_euler[241726]:     "0": [
Jan 23 18:55:41 compute-0 nifty_euler[241726]:         {
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "devices": [
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "/dev/loop3"
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             ],
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_name": "ceph_lv0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_size": "21470642176",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "name": "ceph_lv0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "tags": {
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.cluster_name": "ceph",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.crush_device_class": "",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.encrypted": "0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.objectstore": "bluestore",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.osd_id": "0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.type": "block",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.vdo": "0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.with_tpm": "0"
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             },
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "type": "block",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "vg_name": "ceph_vg0"
Jan 23 18:55:41 compute-0 nifty_euler[241726]:         }
Jan 23 18:55:41 compute-0 nifty_euler[241726]:     ],
Jan 23 18:55:41 compute-0 nifty_euler[241726]:     "1": [
Jan 23 18:55:41 compute-0 nifty_euler[241726]:         {
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "devices": [
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "/dev/loop4"
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             ],
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_name": "ceph_lv1",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_size": "21470642176",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "name": "ceph_lv1",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "tags": {
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.cluster_name": "ceph",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.crush_device_class": "",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.encrypted": "0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.objectstore": "bluestore",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.osd_id": "1",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.type": "block",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.vdo": "0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.with_tpm": "0"
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             },
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "type": "block",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "vg_name": "ceph_vg1"
Jan 23 18:55:41 compute-0 nifty_euler[241726]:         }
Jan 23 18:55:41 compute-0 nifty_euler[241726]:     ],
Jan 23 18:55:41 compute-0 nifty_euler[241726]:     "2": [
Jan 23 18:55:41 compute-0 nifty_euler[241726]:         {
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "devices": [
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "/dev/loop5"
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             ],
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_name": "ceph_lv2",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_size": "21470642176",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "name": "ceph_lv2",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "tags": {
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.cluster_name": "ceph",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.crush_device_class": "",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.encrypted": "0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.objectstore": "bluestore",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.osd_id": "2",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.type": "block",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.vdo": "0",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:                 "ceph.with_tpm": "0"
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             },
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "type": "block",
Jan 23 18:55:41 compute-0 nifty_euler[241726]:             "vg_name": "ceph_vg2"
Jan 23 18:55:41 compute-0 nifty_euler[241726]:         }
Jan 23 18:55:41 compute-0 nifty_euler[241726]:     ]
Jan 23 18:55:41 compute-0 nifty_euler[241726]: }
Jan 23 18:55:41 compute-0 systemd[1]: libpod-1f5efc940fc8a5bd067d485a339c4699f489681a1a90fd5eb78b960a13eda130.scope: Deactivated successfully.
Jan 23 18:55:41 compute-0 podman[241710]: 2026-01-23 18:55:41.81834318 +0000 UTC m=+1.685844841 container died 1f5efc940fc8a5bd067d485a339c4699f489681a1a90fd5eb78b960a13eda130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:55:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2da4ffa90fb28c9bb1106c7f69f922c892025a801224bc61ee20670481a6fdaa-merged.mount: Deactivated successfully.
Jan 23 18:55:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:42 compute-0 podman[241710]: 2026-01-23 18:55:42.443104153 +0000 UTC m=+2.310605824 container remove 1f5efc940fc8a5bd067d485a339c4699f489681a1a90fd5eb78b960a13eda130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 23 18:55:42 compute-0 systemd[1]: libpod-conmon-1f5efc940fc8a5bd067d485a339c4699f489681a1a90fd5eb78b960a13eda130.scope: Deactivated successfully.
Jan 23 18:55:42 compute-0 sudo[241633]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:42 compute-0 sudo[241746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:55:42 compute-0 sudo[241746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:42 compute-0 sudo[241746]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:42 compute-0 sudo[241771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:55:42 compute-0 sudo[241771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:43 compute-0 podman[241809]: 2026-01-23 18:55:42.910188296 +0000 UTC m=+0.032132893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:55:43 compute-0 podman[241809]: 2026-01-23 18:55:43.247919881 +0000 UTC m=+0.369864478 container create 6e1f7215f404a676f98e6937baef7d0ef3b205dbb9b4f3e8aae583990b127f72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ardinghelli, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:55:43 compute-0 systemd[1]: Started libpod-conmon-6e1f7215f404a676f98e6937baef7d0ef3b205dbb9b4f3e8aae583990b127f72.scope.
Jan 23 18:55:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:55:43 compute-0 ceph-mon[75097]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:43 compute-0 podman[241809]: 2026-01-23 18:55:43.85201381 +0000 UTC m=+0.973958407 container init 6e1f7215f404a676f98e6937baef7d0ef3b205dbb9b4f3e8aae583990b127f72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ardinghelli, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:55:43 compute-0 podman[241809]: 2026-01-23 18:55:43.859858241 +0000 UTC m=+0.981802818 container start 6e1f7215f404a676f98e6937baef7d0ef3b205dbb9b4f3e8aae583990b127f72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ardinghelli, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:55:43 compute-0 mystifying_ardinghelli[241825]: 167 167
Jan 23 18:55:43 compute-0 systemd[1]: libpod-6e1f7215f404a676f98e6937baef7d0ef3b205dbb9b4f3e8aae583990b127f72.scope: Deactivated successfully.
Jan 23 18:55:43 compute-0 podman[241809]: 2026-01-23 18:55:43.905793519 +0000 UTC m=+1.027738086 container attach 6e1f7215f404a676f98e6937baef7d0ef3b205dbb9b4f3e8aae583990b127f72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:55:43 compute-0 podman[241809]: 2026-01-23 18:55:43.906168149 +0000 UTC m=+1.028112726 container died 6e1f7215f404a676f98e6937baef7d0ef3b205dbb9b4f3e8aae583990b127f72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ardinghelli, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:55:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c37f9ab43dba78c01e7550dfffb492cbcbf5ef522c19eea1214e47838bbcc144-merged.mount: Deactivated successfully.
Jan 23 18:55:44 compute-0 podman[241809]: 2026-01-23 18:55:44.028892737 +0000 UTC m=+1.150837314 container remove 6e1f7215f404a676f98e6937baef7d0ef3b205dbb9b4f3e8aae583990b127f72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ardinghelli, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 18:55:44 compute-0 systemd[1]: libpod-conmon-6e1f7215f404a676f98e6937baef7d0ef3b205dbb9b4f3e8aae583990b127f72.scope: Deactivated successfully.
Jan 23 18:55:44 compute-0 podman[241850]: 2026-01-23 18:55:44.190874191 +0000 UTC m=+0.027372417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:55:44 compute-0 podman[241850]: 2026-01-23 18:55:44.30625038 +0000 UTC m=+0.142748586 container create daff9367fb24b61053fde29cb9b5a088c54fac2a6e0e4763fd3bef177572997a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:55:44 compute-0 systemd[1]: Started libpod-conmon-daff9367fb24b61053fde29cb9b5a088c54fac2a6e0e4763fd3bef177572997a.scope.
Jan 23 18:55:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a973de2b9ce256b8dc25993ca547e76db3f950d27f3d3f5e2fdd1ae28d71001/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a973de2b9ce256b8dc25993ca547e76db3f950d27f3d3f5e2fdd1ae28d71001/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a973de2b9ce256b8dc25993ca547e76db3f950d27f3d3f5e2fdd1ae28d71001/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a973de2b9ce256b8dc25993ca547e76db3f950d27f3d3f5e2fdd1ae28d71001/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:55:44 compute-0 podman[241850]: 2026-01-23 18:55:44.45447909 +0000 UTC m=+0.290977296 container init daff9367fb24b61053fde29cb9b5a088c54fac2a6e0e4763fd3bef177572997a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 18:55:44 compute-0 podman[241850]: 2026-01-23 18:55:44.462951416 +0000 UTC m=+0.299449622 container start daff9367fb24b61053fde29cb9b5a088c54fac2a6e0e4763fd3bef177572997a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 23 18:55:44 compute-0 podman[241850]: 2026-01-23 18:55:44.468730428 +0000 UTC m=+0.305228664 container attach daff9367fb24b61053fde29cb9b5a088c54fac2a6e0e4763fd3bef177572997a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_robinson, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:55:44 compute-0 ceph-mon[75097]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:45 compute-0 lvm[241945]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:55:45 compute-0 lvm[241944]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:55:45 compute-0 lvm[241945]: VG ceph_vg1 finished
Jan 23 18:55:45 compute-0 lvm[241944]: VG ceph_vg0 finished
Jan 23 18:55:45 compute-0 lvm[241947]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:55:45 compute-0 lvm[241947]: VG ceph_vg2 finished
Jan 23 18:55:45 compute-0 upbeat_robinson[241866]: {}
Jan 23 18:55:45 compute-0 systemd[1]: libpod-daff9367fb24b61053fde29cb9b5a088c54fac2a6e0e4763fd3bef177572997a.scope: Deactivated successfully.
Jan 23 18:55:45 compute-0 podman[241850]: 2026-01-23 18:55:45.256648403 +0000 UTC m=+1.093146619 container died daff9367fb24b61053fde29cb9b5a088c54fac2a6e0e4763fd3bef177572997a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_robinson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 23 18:55:45 compute-0 systemd[1]: libpod-daff9367fb24b61053fde29cb9b5a088c54fac2a6e0e4763fd3bef177572997a.scope: Consumed 1.332s CPU time.
Jan 23 18:55:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a973de2b9ce256b8dc25993ca547e76db3f950d27f3d3f5e2fdd1ae28d71001-merged.mount: Deactivated successfully.
Jan 23 18:55:45 compute-0 podman[241850]: 2026-01-23 18:55:45.477334758 +0000 UTC m=+1.313832964 container remove daff9367fb24b61053fde29cb9b5a088c54fac2a6e0e4763fd3bef177572997a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 18:55:45 compute-0 systemd[1]: libpod-conmon-daff9367fb24b61053fde29cb9b5a088c54fac2a6e0e4763fd3bef177572997a.scope: Deactivated successfully.
Jan 23 18:55:45 compute-0 sudo[241771]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:55:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:55:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:55:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:55:45 compute-0 sudo[241962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:55:45 compute-0 sudo[241962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:55:45 compute-0 sudo[241962]: pam_unix(sudo:session): session closed for user root
Jan 23 18:55:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:55:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:55:46 compute-0 ceph-mon[75097]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:47 compute-0 podman[241987]: 2026-01-23 18:55:47.015301006 +0000 UTC m=+0.097239448 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 18:55:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 23 18:55:47 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 23 18:55:47 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 18:55:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 18:55:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 18:55:47 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 23 18:55:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:48 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 18:55:48 compute-0 ceph-mon[75097]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:50 compute-0 ceph-mon[75097]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:52 compute-0 nova_compute[240119]: 2026-01-23 18:55:52.875 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:55:52 compute-0 nova_compute[240119]: 2026-01-23 18:55:52.875 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.047 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.047 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.048 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.048 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.048 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.108 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.109 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.109 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.109 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.110 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:55:53 compute-0 ceph-mon[75097]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:55:53 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3203240040' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.685 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.835 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.836 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5107MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.837 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.837 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.974 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 18:55:53 compute-0 nova_compute[240119]: 2026-01-23 18:55:53.974 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.004 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:55:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:54 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3203240040' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:55:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:55:54 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183298496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.575 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.581 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.617 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.619 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.620 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.920 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.921 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.921 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.937 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.937 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.937 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:55:54 compute-0 nova_compute[240119]: 2026-01-23 18:55:54.938 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 18:55:55 compute-0 ceph-mon[75097]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:55 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1183298496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:55:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 18:55:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2198471831' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:55:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 18:55:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2198471831' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:55:57 compute-0 podman[242057]: 2026-01-23 18:55:57.020862696 +0000 UTC m=+0.083351340 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:55:57 compute-0 ceph-mon[75097]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2198471831' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:55:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2198471831' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:55:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:55:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:55:59 compute-0 ceph-mon[75097]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:56:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:56:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:56:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:56:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:56:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:56:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:00 compute-0 ceph-mon[75097]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:03 compute-0 ceph-mon[75097]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:05 compute-0 ceph-mon[75097]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:07 compute-0 ceph-mon[75097]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:08 compute-0 ceph-mon[75097]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:11 compute-0 ceph-mon[75097]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:13 compute-0 ceph-mon[75097]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:56:13.577 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:56:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:56:13.578 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:56:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:56:13.578 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:56:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:14 compute-0 ceph-mon[75097]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:17 compute-0 ceph-mon[75097]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:18 compute-0 podman[242076]: 2026-01-23 18:56:18.033694047 +0000 UTC m=+0.098525690 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 18:56:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:19 compute-0 ceph-mon[75097]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:21 compute-0 ceph-mon[75097]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:23 compute-0 ceph-mon[75097]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:25 compute-0 ceph-mon[75097]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:26 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:56:26.610 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 18:56:26 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:56:26.612 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 18:56:26 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:56:26.614 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 18:56:27 compute-0 ceph-mon[75097]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:27 compute-0 podman[242102]: 2026-01-23 18:56:27.988719773 +0000 UTC m=+0.077331604 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 23 18:56:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:56:28
Jan 23 18:56:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:56:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:56:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'images', 'vms']
Jan 23 18:56:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:56:29 compute-0 ceph-mon[75097]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:56:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:56:31 compute-0 ceph-mon[75097]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:32 compute-0 ceph-mon[75097]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 23 18:56:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:35 compute-0 ceph-mon[75097]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 23 18:56:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Jan 23 18:56:36 compute-0 ceph-mon[75097]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Jan 23 18:56:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Jan 23 18:56:39 compute-0 ceph-mon[75097]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Jan 23 18:56:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:56:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:56:41 compute-0 ceph-mon[75097]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:56:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:56:43 compute-0 ceph-mon[75097]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:56:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:56:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:45 compute-0 ceph-mon[75097]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 18:56:45 compute-0 sudo[242122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:56:45 compute-0 sudo[242122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:56:45 compute-0 sudo[242122]: pam_unix(sudo:session): session closed for user root
Jan 23 18:56:45 compute-0 sudo[242147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:56:45 compute-0 sudo[242147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:56:46 compute-0 sudo[242147]: pam_unix(sudo:session): session closed for user root
Jan 23 18:56:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 18:56:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 23 18:56:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:56:46 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:56:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:56:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:56:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:56:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 23 18:56:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:56:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:56:46 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:56:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:56:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:56:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:56:46 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:56:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 23 18:56:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:56:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:56:46 compute-0 sudo[242202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:56:46 compute-0 sudo[242202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:56:46 compute-0 sudo[242202]: pam_unix(sudo:session): session closed for user root
Jan 23 18:56:46 compute-0 sudo[242227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:56:46 compute-0 sudo[242227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:56:47 compute-0 podman[242265]: 2026-01-23 18:56:46.957693612 +0000 UTC m=+0.027960341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:56:47 compute-0 podman[242265]: 2026-01-23 18:56:47.070362926 +0000 UTC m=+0.140629575 container create 25e4fb6d577c85ac0326c22edee80f20e737eb9140d456b16b29a847acac4fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cohen, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 18:56:47 compute-0 systemd[1]: Started libpod-conmon-25e4fb6d577c85ac0326c22edee80f20e737eb9140d456b16b29a847acac4fae.scope.
Jan 23 18:56:47 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:56:47 compute-0 podman[242265]: 2026-01-23 18:56:47.235881868 +0000 UTC m=+0.306148547 container init 25e4fb6d577c85ac0326c22edee80f20e737eb9140d456b16b29a847acac4fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:56:47 compute-0 podman[242265]: 2026-01-23 18:56:47.244313043 +0000 UTC m=+0.314579702 container start 25e4fb6d577c85ac0326c22edee80f20e737eb9140d456b16b29a847acac4fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:56:47 compute-0 objective_cohen[242281]: 167 167
Jan 23 18:56:47 compute-0 systemd[1]: libpod-25e4fb6d577c85ac0326c22edee80f20e737eb9140d456b16b29a847acac4fae.scope: Deactivated successfully.
Jan 23 18:56:47 compute-0 podman[242265]: 2026-01-23 18:56:47.556325801 +0000 UTC m=+0.626592470 container attach 25e4fb6d577c85ac0326c22edee80f20e737eb9140d456b16b29a847acac4fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cohen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:56:47 compute-0 podman[242265]: 2026-01-23 18:56:47.557095799 +0000 UTC m=+0.627362448 container died 25e4fb6d577c85ac0326c22edee80f20e737eb9140d456b16b29a847acac4fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cohen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba187c24a52540977709bd279a1cca1ddd204f2f01314c5d045ba7220f15b188-merged.mount: Deactivated successfully.
Jan 23 18:56:47 compute-0 ceph-mon[75097]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 23 18:56:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:56:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:56:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:56:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:56:47 compute-0 podman[242265]: 2026-01-23 18:56:47.738916707 +0000 UTC m=+0.809183366 container remove 25e4fb6d577c85ac0326c22edee80f20e737eb9140d456b16b29a847acac4fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cohen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:56:47 compute-0 systemd[1]: libpod-conmon-25e4fb6d577c85ac0326c22edee80f20e737eb9140d456b16b29a847acac4fae.scope: Deactivated successfully.
Jan 23 18:56:47 compute-0 podman[242305]: 2026-01-23 18:56:47.880450644 +0000 UTC m=+0.024807456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:56:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 23 18:56:48 compute-0 podman[242305]: 2026-01-23 18:56:48.820207109 +0000 UTC m=+0.964563901 container create 152717db8ebea09077fd74155c27c5ecaba56e5a0046f66c5978788b9c314950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:56:48 compute-0 ceph-mon[75097]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 23 18:56:49 compute-0 systemd[1]: Started libpod-conmon-152717db8ebea09077fd74155c27c5ecaba56e5a0046f66c5978788b9c314950.scope.
Jan 23 18:56:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fc5225f1eb5cb309eb01f14f563248fc40382b419699b3842d231fdfc504a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fc5225f1eb5cb309eb01f14f563248fc40382b419699b3842d231fdfc504a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fc5225f1eb5cb309eb01f14f563248fc40382b419699b3842d231fdfc504a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fc5225f1eb5cb309eb01f14f563248fc40382b419699b3842d231fdfc504a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fc5225f1eb5cb309eb01f14f563248fc40382b419699b3842d231fdfc504a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:56:49 compute-0 sshd-session[242350]: Connection closed by 111.47.65.219 port 62985
Jan 23 18:56:49 compute-0 podman[242305]: 2026-01-23 18:56:49.892596945 +0000 UTC m=+2.036953837 container init 152717db8ebea09077fd74155c27c5ecaba56e5a0046f66c5978788b9c314950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 18:56:49 compute-0 podman[242319]: 2026-01-23 18:56:49.894785009 +0000 UTC m=+1.037649201 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 23 18:56:49 compute-0 podman[242305]: 2026-01-23 18:56:49.90593465 +0000 UTC m=+2.050291472 container start 152717db8ebea09077fd74155c27c5ecaba56e5a0046f66c5978788b9c314950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:56:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 23 18:56:50 compute-0 keen_archimedes[242338]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:56:50 compute-0 keen_archimedes[242338]: --> All data devices are unavailable
Jan 23 18:56:50 compute-0 systemd[1]: libpod-152717db8ebea09077fd74155c27c5ecaba56e5a0046f66c5978788b9c314950.scope: Deactivated successfully.
Jan 23 18:56:51 compute-0 nova_compute[240119]: 2026-01-23 18:56:51.350 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:56:51 compute-0 nova_compute[240119]: 2026-01-23 18:56:51.352 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:56:52 compute-0 nova_compute[240119]: 2026-01-23 18:56:52.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:56:52 compute-0 nova_compute[240119]: 2026-01-23 18:56:52.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:56:52 compute-0 nova_compute[240119]: 2026-01-23 18:56:52.370 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:56:52 compute-0 nova_compute[240119]: 2026-01-23 18:56:52.371 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:56:52 compute-0 nova_compute[240119]: 2026-01-23 18:56:52.371 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:56:52 compute-0 nova_compute[240119]: 2026-01-23 18:56:52.371 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 18:56:52 compute-0 nova_compute[240119]: 2026-01-23 18:56:52.371 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:56:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:53 compute-0 podman[242305]: 2026-01-23 18:56:53.12779936 +0000 UTC m=+5.272156202 container attach 152717db8ebea09077fd74155c27c5ecaba56e5a0046f66c5978788b9c314950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:56:53 compute-0 podman[242305]: 2026-01-23 18:56:53.129675795 +0000 UTC m=+5.274032587 container died 152717db8ebea09077fd74155c27c5ecaba56e5a0046f66c5978788b9c314950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:56:53 compute-0 ceph-mon[75097]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 23 18:56:53 compute-0 ceph-mon[75097]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:56:53 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2540798199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:56:53 compute-0 nova_compute[240119]: 2026-01-23 18:56:53.873 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:56:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9fc5225f1eb5cb309eb01f14f563248fc40382b419699b3842d231fdfc504a2-merged.mount: Deactivated successfully.
Jan 23 18:56:54 compute-0 nova_compute[240119]: 2026-01-23 18:56:54.054 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 18:56:54 compute-0 nova_compute[240119]: 2026-01-23 18:56:54.055 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5072MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 18:56:54 compute-0 nova_compute[240119]: 2026-01-23 18:56:54.055 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:56:54 compute-0 nova_compute[240119]: 2026-01-23 18:56:54.055 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:56:54 compute-0 nova_compute[240119]: 2026-01-23 18:56:54.116 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 18:56:54 compute-0 nova_compute[240119]: 2026-01-23 18:56:54.116 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 18:56:54 compute-0 nova_compute[240119]: 2026-01-23 18:56:54.134 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:56:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:56:55 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093870420' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:56:55 compute-0 nova_compute[240119]: 2026-01-23 18:56:55.075 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.941s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:56:55 compute-0 nova_compute[240119]: 2026-01-23 18:56:55.083 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 18:56:55 compute-0 nova_compute[240119]: 2026-01-23 18:56:55.102 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 18:56:55 compute-0 nova_compute[240119]: 2026-01-23 18:56:55.106 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 18:56:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:56:55 compute-0 nova_compute[240119]: 2026-01-23 18:56:55.106 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:56:56 compute-0 nova_compute[240119]: 2026-01-23 18:56:56.107 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:56:56 compute-0 nova_compute[240119]: 2026-01-23 18:56:56.108 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 18:56:56 compute-0 nova_compute[240119]: 2026-01-23 18:56:56.108 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 18:56:56 compute-0 nova_compute[240119]: 2026-01-23 18:56:56.124 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 18:56:56 compute-0 nova_compute[240119]: 2026-01-23 18:56:56.125 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:56:56 compute-0 nova_compute[240119]: 2026-01-23 18:56:56.126 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:56:56 compute-0 nova_compute[240119]: 2026-01-23 18:56:56.126 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:56:56 compute-0 nova_compute[240119]: 2026-01-23 18:56:56.127 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:56:56 compute-0 nova_compute[240119]: 2026-01-23 18:56:56.127 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 18:56:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:56 compute-0 podman[242305]: 2026-01-23 18:56:56.708879266 +0000 UTC m=+8.853236108 container remove 152717db8ebea09077fd74155c27c5ecaba56e5a0046f66c5978788b9c314950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 23 18:56:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2540798199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:56:56 compute-0 sudo[242227]: pam_unix(sudo:session): session closed for user root
Jan 23 18:56:56 compute-0 systemd[1]: libpod-conmon-152717db8ebea09077fd74155c27c5ecaba56e5a0046f66c5978788b9c314950.scope: Deactivated successfully.
Jan 23 18:56:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 18:56:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4113931689' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:56:56 compute-0 sudo[242425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:56:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 18:56:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4113931689' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:56:56 compute-0 sudo[242425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:56:56 compute-0 sudo[242425]: pam_unix(sudo:session): session closed for user root
Jan 23 18:56:56 compute-0 sudo[242450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:56:56 compute-0 sudo[242450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:56:57 compute-0 podman[242488]: 2026-01-23 18:56:57.212293126 +0000 UTC m=+0.021381342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:56:57 compute-0 podman[242488]: 2026-01-23 18:56:57.655032503 +0000 UTC m=+0.464120699 container create aa6ee7f685e9a5b29709e4b21a1091a5e069f3ab9d4ca65204381ea990c98259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 18:56:57 compute-0 systemd[1]: Started libpod-conmon-aa6ee7f685e9a5b29709e4b21a1091a5e069f3ab9d4ca65204381ea990c98259.scope.
Jan 23 18:56:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:56:58 compute-0 podman[242488]: 2026-01-23 18:56:58.003761098 +0000 UTC m=+0.812849324 container init aa6ee7f685e9a5b29709e4b21a1091a5e069f3ab9d4ca65204381ea990c98259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:56:58 compute-0 podman[242488]: 2026-01-23 18:56:58.01201664 +0000 UTC m=+0.821104836 container start aa6ee7f685e9a5b29709e4b21a1091a5e069f3ab9d4ca65204381ea990c98259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 23 18:56:58 compute-0 inspiring_mirzakhani[242505]: 167 167
Jan 23 18:56:58 compute-0 systemd[1]: libpod-aa6ee7f685e9a5b29709e4b21a1091a5e069f3ab9d4ca65204381ea990c98259.scope: Deactivated successfully.
Jan 23 18:56:58 compute-0 ceph-mon[75097]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4093870420' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:56:58 compute-0 ceph-mon[75097]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/4113931689' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:56:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/4113931689' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:56:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:58 compute-0 podman[242488]: 2026-01-23 18:56:58.445684849 +0000 UTC m=+1.254773045 container attach aa6ee7f685e9a5b29709e4b21a1091a5e069f3ab9d4ca65204381ea990c98259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:56:58 compute-0 podman[242488]: 2026-01-23 18:56:58.446066328 +0000 UTC m=+1.255154524 container died aa6ee7f685e9a5b29709e4b21a1091a5e069f3ab9d4ca65204381ea990c98259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 23 18:56:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ab8798ad677761eb3e4ee1df41816966bace4eebae6c32e7189439a5ec623ab-merged.mount: Deactivated successfully.
Jan 23 18:56:58 compute-0 podman[242488]: 2026-01-23 18:56:58.915863674 +0000 UTC m=+1.724951870 container remove aa6ee7f685e9a5b29709e4b21a1091a5e069f3ab9d4ca65204381ea990c98259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 18:56:58 compute-0 systemd[1]: libpod-conmon-aa6ee7f685e9a5b29709e4b21a1091a5e069f3ab9d4ca65204381ea990c98259.scope: Deactivated successfully.
Jan 23 18:56:59 compute-0 podman[242510]: 2026-01-23 18:56:59.039069146 +0000 UTC m=+0.983119888 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 18:56:59 compute-0 podman[242542]: 2026-01-23 18:56:59.079845687 +0000 UTC m=+0.028785488 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:56:59 compute-0 ceph-mon[75097]: pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:56:59 compute-0 podman[242542]: 2026-01-23 18:56:59.376238167 +0000 UTC m=+0.325177948 container create e2f4ab393d3031205b7c6a6e5fb6eb0d088ad25136d59e5fd61b420dfc2a11fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 18:56:59 compute-0 systemd[1]: Started libpod-conmon-e2f4ab393d3031205b7c6a6e5fb6eb0d088ad25136d59e5fd61b420dfc2a11fe.scope.
Jan 23 18:56:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c026ef23106683b49be784cc6bb3bca02b942eb07929d5cc0e4cbad93001ea0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c026ef23106683b49be784cc6bb3bca02b942eb07929d5cc0e4cbad93001ea0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c026ef23106683b49be784cc6bb3bca02b942eb07929d5cc0e4cbad93001ea0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c026ef23106683b49be784cc6bb3bca02b942eb07929d5cc0e4cbad93001ea0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:57:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:57:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:57:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:57:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:57:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:57:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:57:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:00 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 23 18:57:00 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:00.801500) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 18:57:00 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 23 18:57:00 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194620801582, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1365, "num_deletes": 251, "total_data_size": 2137230, "memory_usage": 2177608, "flush_reason": "Manual Compaction"}
Jan 23 18:57:00 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 23 18:57:00 compute-0 podman[242542]: 2026-01-23 18:57:00.80390357 +0000 UTC m=+1.752843361 container init e2f4ab393d3031205b7c6a6e5fb6eb0d088ad25136d59e5fd61b420dfc2a11fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:57:00 compute-0 podman[242542]: 2026-01-23 18:57:00.814702765 +0000 UTC m=+1.763642546 container start e2f4ab393d3031205b7c6a6e5fb6eb0d088ad25136d59e5fd61b420dfc2a11fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:57:00 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194620890921, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2105904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15055, "largest_seqno": 16419, "table_properties": {"data_size": 2099543, "index_size": 3561, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13360, "raw_average_key_size": 19, "raw_value_size": 2086719, "raw_average_value_size": 3077, "num_data_blocks": 163, "num_entries": 678, "num_filter_entries": 678, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769194475, "oldest_key_time": 1769194475, "file_creation_time": 1769194620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:57:00 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 89464 microseconds, and 5698 cpu microseconds.
Jan 23 18:57:00 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:00.890965) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2105904 bytes OK
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:00.890987) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.086236) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.086297) EVENT_LOG_v1 {"time_micros": 1769194621086284, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.086328) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2131144, prev total WAL file size 2157960, number of live WAL files 2.
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.088763) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2056KB)], [35(7544KB)]
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194621088845, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9831670, "oldest_snapshot_seqno": -1}
Jan 23 18:57:01 compute-0 podman[242542]: 2026-01-23 18:57:01.134234534 +0000 UTC m=+2.083174315 container attach e2f4ab393d3031205b7c6a6e5fb6eb0d088ad25136d59e5fd61b420dfc2a11fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 18:57:01 compute-0 crazy_hopper[242561]: {
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:     "0": [
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:         {
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "devices": [
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "/dev/loop3"
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             ],
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_name": "ceph_lv0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_size": "21470642176",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "name": "ceph_lv0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "tags": {
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.cluster_name": "ceph",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.crush_device_class": "",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.encrypted": "0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.objectstore": "bluestore",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.osd_id": "0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.type": "block",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.vdo": "0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.with_tpm": "0"
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             },
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "type": "block",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "vg_name": "ceph_vg0"
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:         }
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:     ],
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:     "1": [
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:         {
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "devices": [
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "/dev/loop4"
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             ],
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_name": "ceph_lv1",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_size": "21470642176",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "name": "ceph_lv1",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "tags": {
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.cluster_name": "ceph",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.crush_device_class": "",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.encrypted": "0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.objectstore": "bluestore",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.osd_id": "1",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.type": "block",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.vdo": "0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.with_tpm": "0"
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             },
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "type": "block",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "vg_name": "ceph_vg1"
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:         }
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:     ],
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:     "2": [
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:         {
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "devices": [
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "/dev/loop5"
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             ],
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_name": "ceph_lv2",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_size": "21470642176",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "name": "ceph_lv2",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "tags": {
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.cluster_name": "ceph",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.crush_device_class": "",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.encrypted": "0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.objectstore": "bluestore",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.osd_id": "2",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.type": "block",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.vdo": "0",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:                 "ceph.with_tpm": "0"
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             },
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "type": "block",
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:             "vg_name": "ceph_vg2"
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:         }
Jan 23 18:57:01 compute-0 crazy_hopper[242561]:     ]
Jan 23 18:57:01 compute-0 crazy_hopper[242561]: }
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4069 keys, 8022266 bytes, temperature: kUnknown
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194621170124, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8022266, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7992789, "index_size": 18231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99488, "raw_average_key_size": 24, "raw_value_size": 7916877, "raw_average_value_size": 1945, "num_data_blocks": 771, "num_entries": 4069, "num_filter_entries": 4069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769194621, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.170328) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8022266 bytes
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.172006) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.8 rd, 98.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.4 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(8.5) write-amplify(3.8) OK, records in: 4583, records dropped: 514 output_compression: NoCompression
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.172025) EVENT_LOG_v1 {"time_micros": 1769194621172016, "job": 16, "event": "compaction_finished", "compaction_time_micros": 81361, "compaction_time_cpu_micros": 35078, "output_level": 6, "num_output_files": 1, "total_output_size": 8022266, "num_input_records": 4583, "num_output_records": 4069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194621172695, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194621173969, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.088598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.174040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.174044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.174046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.174047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:57:01 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-18:57:01.174049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 18:57:01 compute-0 ceph-mon[75097]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:01 compute-0 systemd[1]: libpod-e2f4ab393d3031205b7c6a6e5fb6eb0d088ad25136d59e5fd61b420dfc2a11fe.scope: Deactivated successfully.
Jan 23 18:57:01 compute-0 podman[242542]: 2026-01-23 18:57:01.180156301 +0000 UTC m=+2.129096092 container died e2f4ab393d3031205b7c6a6e5fb6eb0d088ad25136d59e5fd61b420dfc2a11fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 18:57:01 compute-0 anacron[7482]: Job `cron.weekly' started
Jan 23 18:57:01 compute-0 anacron[7482]: Job `cron.weekly' terminated
Jan 23 18:57:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c026ef23106683b49be784cc6bb3bca02b942eb07929d5cc0e4cbad93001ea0e-merged.mount: Deactivated successfully.
Jan 23 18:57:01 compute-0 podman[242542]: 2026-01-23 18:57:01.793415315 +0000 UTC m=+2.742355096 container remove e2f4ab393d3031205b7c6a6e5fb6eb0d088ad25136d59e5fd61b420dfc2a11fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:57:01 compute-0 sudo[242450]: pam_unix(sudo:session): session closed for user root
Jan 23 18:57:01 compute-0 systemd[1]: libpod-conmon-e2f4ab393d3031205b7c6a6e5fb6eb0d088ad25136d59e5fd61b420dfc2a11fe.scope: Deactivated successfully.
Jan 23 18:57:01 compute-0 sudo[242583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:57:01 compute-0 sudo[242583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:57:01 compute-0 sudo[242583]: pam_unix(sudo:session): session closed for user root
Jan 23 18:57:01 compute-0 sudo[242608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:57:01 compute-0 sudo[242608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:57:02 compute-0 podman[242645]: 2026-01-23 18:57:02.365319145 +0000 UTC m=+0.074299464 container create b993689938b1228886b5c951120144f4b4e7107c9d82969fea9fedca88c40e7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 23 18:57:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:02 compute-0 podman[242645]: 2026-01-23 18:57:02.316431425 +0000 UTC m=+0.025411764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:57:02 compute-0 systemd[1]: Started libpod-conmon-b993689938b1228886b5c951120144f4b4e7107c9d82969fea9fedca88c40e7f.scope.
Jan 23 18:57:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:57:02 compute-0 ceph-mon[75097]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:02 compute-0 podman[242645]: 2026-01-23 18:57:02.920515825 +0000 UTC m=+0.629496144 container init b993689938b1228886b5c951120144f4b4e7107c9d82969fea9fedca88c40e7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 23 18:57:02 compute-0 podman[242645]: 2026-01-23 18:57:02.930037089 +0000 UTC m=+0.639017398 container start b993689938b1228886b5c951120144f4b4e7107c9d82969fea9fedca88c40e7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 18:57:02 compute-0 stoic_buck[242661]: 167 167
Jan 23 18:57:02 compute-0 podman[242645]: 2026-01-23 18:57:02.935968004 +0000 UTC m=+0.644948303 container attach b993689938b1228886b5c951120144f4b4e7107c9d82969fea9fedca88c40e7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:57:02 compute-0 systemd[1]: libpod-b993689938b1228886b5c951120144f4b4e7107c9d82969fea9fedca88c40e7f.scope: Deactivated successfully.
Jan 23 18:57:02 compute-0 podman[242645]: 2026-01-23 18:57:02.939676326 +0000 UTC m=+0.648656635 container died b993689938b1228886b5c951120144f4b4e7107c9d82969fea9fedca88c40e7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_buck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 18:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f44861fe51321a3b5d99e795e8e05c14afb7392e761d0666e5f51770a5912cc7-merged.mount: Deactivated successfully.
Jan 23 18:57:03 compute-0 podman[242645]: 2026-01-23 18:57:03.068488526 +0000 UTC m=+0.777468825 container remove b993689938b1228886b5c951120144f4b4e7107c9d82969fea9fedca88c40e7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 18:57:03 compute-0 systemd[1]: libpod-conmon-b993689938b1228886b5c951120144f4b4e7107c9d82969fea9fedca88c40e7f.scope: Deactivated successfully.
Jan 23 18:57:03 compute-0 podman[242687]: 2026-01-23 18:57:03.242800551 +0000 UTC m=+0.045284181 container create 06833f3f0f73e91cdfab86ab3e3187fa811935b1924466eb05cc645aff37f4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 23 18:57:03 compute-0 systemd[1]: Started libpod-conmon-06833f3f0f73e91cdfab86ab3e3187fa811935b1924466eb05cc645aff37f4e6.scope.
Jan 23 18:57:03 compute-0 podman[242687]: 2026-01-23 18:57:03.22153471 +0000 UTC m=+0.024018420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:57:03 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b475d1479a88aaff79cf690e74ea0e2ff63fa98fc9d90e89ddc43c95e8bd0ec0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b475d1479a88aaff79cf690e74ea0e2ff63fa98fc9d90e89ddc43c95e8bd0ec0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b475d1479a88aaff79cf690e74ea0e2ff63fa98fc9d90e89ddc43c95e8bd0ec0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b475d1479a88aaff79cf690e74ea0e2ff63fa98fc9d90e89ddc43c95e8bd0ec0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:57:03 compute-0 podman[242687]: 2026-01-23 18:57:03.340408836 +0000 UTC m=+0.142892456 container init 06833f3f0f73e91cdfab86ab3e3187fa811935b1924466eb05cc645aff37f4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 18:57:03 compute-0 podman[242687]: 2026-01-23 18:57:03.348817252 +0000 UTC m=+0.151300872 container start 06833f3f0f73e91cdfab86ab3e3187fa811935b1924466eb05cc645aff37f4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 18:57:03 compute-0 podman[242687]: 2026-01-23 18:57:03.353821095 +0000 UTC m=+0.156304735 container attach 06833f3f0f73e91cdfab86ab3e3187fa811935b1924466eb05cc645aff37f4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:57:04 compute-0 lvm[242782]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:57:04 compute-0 lvm[242783]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:57:04 compute-0 lvm[242782]: VG ceph_vg0 finished
Jan 23 18:57:04 compute-0 lvm[242783]: VG ceph_vg1 finished
Jan 23 18:57:04 compute-0 lvm[242785]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:57:04 compute-0 lvm[242785]: VG ceph_vg2 finished
Jan 23 18:57:04 compute-0 elegant_kapitsa[242703]: {}
Jan 23 18:57:04 compute-0 systemd[1]: libpod-06833f3f0f73e91cdfab86ab3e3187fa811935b1924466eb05cc645aff37f4e6.scope: Deactivated successfully.
Jan 23 18:57:04 compute-0 systemd[1]: libpod-06833f3f0f73e91cdfab86ab3e3187fa811935b1924466eb05cc645aff37f4e6.scope: Consumed 1.411s CPU time.
Jan 23 18:57:04 compute-0 podman[242687]: 2026-01-23 18:57:04.220189029 +0000 UTC m=+1.022672659 container died 06833f3f0f73e91cdfab86ab3e3187fa811935b1924466eb05cc645aff37f4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b475d1479a88aaff79cf690e74ea0e2ff63fa98fc9d90e89ddc43c95e8bd0ec0-merged.mount: Deactivated successfully.
Jan 23 18:57:04 compute-0 podman[242687]: 2026-01-23 18:57:04.316132273 +0000 UTC m=+1.118615893 container remove 06833f3f0f73e91cdfab86ab3e3187fa811935b1924466eb05cc645aff37f4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:57:04 compute-0 systemd[1]: libpod-conmon-06833f3f0f73e91cdfab86ab3e3187fa811935b1924466eb05cc645aff37f4e6.scope: Deactivated successfully.
Jan 23 18:57:04 compute-0 sudo[242608]: pam_unix(sudo:session): session closed for user root
Jan 23 18:57:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:57:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:57:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:57:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:57:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:04 compute-0 sudo[242800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:57:04 compute-0 sudo[242800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:57:04 compute-0 sudo[242800]: pam_unix(sudo:session): session closed for user root
Jan 23 18:57:05 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:57:05 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:57:05 compute-0 ceph-mon[75097]: pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:07 compute-0 ceph-mon[75097]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:09 compute-0 ceph-mon[75097]: pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:11 compute-0 ceph-mon[75097]: pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:57:13.578 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:57:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:57:13.579 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:57:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:57:13.579 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:57:14 compute-0 ceph-mon[75097]: pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:15 compute-0 ceph-mon[75097]: pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:17 compute-0 ceph-mon[75097]: pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:19 compute-0 ceph-mon[75097]: pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:20 compute-0 ceph-mon[75097]: pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:21 compute-0 podman[242825]: 2026-01-23 18:57:21.047697055 +0000 UTC m=+0.132630205 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 18:57:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:23 compute-0 ceph-mon[75097]: pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:24 compute-0 ceph-mon[75097]: pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:26 compute-0 ceph-mon[75097]: pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:28 compute-0 ceph-mon[75097]: pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:57:28
Jan 23 18:57:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:57:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:57:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'backups', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log']
Jan 23 18:57:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:57:29 compute-0 podman[242851]: 2026-01-23 18:57:29.977308024 +0000 UTC m=+0.054204801 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 18:57:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:57:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:57:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:57:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:57:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:57:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:57:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:30 compute-0 ceph-mon[75097]: pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:57:30 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:57:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:57:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:57:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:57:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:57:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:57:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:57:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:57:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:57:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:32 compute-0 ceph-mon[75097]: pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:35 compute-0 ceph-mon[75097]: pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:37 compute-0 ceph-mon[75097]: pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:38 compute-0 ceph-mon[75097]: pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:57:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:41 compute-0 ceph-mon[75097]: pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:42 compute-0 ceph-mon[75097]: pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:45 compute-0 ceph-mon[75097]: pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:47 compute-0 ceph-mon[75097]: pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:48 compute-0 ceph-mon[75097]: pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:51 compute-0 ceph-mon[75097]: pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:52 compute-0 podman[242871]: 2026-01-23 18:57:52.068693786 +0000 UTC m=+0.126687618 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 18:57:52 compute-0 nova_compute[240119]: 2026-01-23 18:57:52.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:57:52 compute-0 nova_compute[240119]: 2026-01-23 18:57:52.370 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:57:52 compute-0 nova_compute[240119]: 2026-01-23 18:57:52.371 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:57:52 compute-0 nova_compute[240119]: 2026-01-23 18:57:52.371 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:57:52 compute-0 nova_compute[240119]: 2026-01-23 18:57:52.371 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 18:57:52 compute-0 nova_compute[240119]: 2026-01-23 18:57:52.371 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:57:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:57:52 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2559364685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:57:52 compute-0 nova_compute[240119]: 2026-01-23 18:57:52.903 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.062 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.063 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.063 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.063 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.136 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.137 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.155 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:57:53 compute-0 ceph-mon[75097]: pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:53 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2559364685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:57:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:57:53 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936026711' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.763 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.770 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.892 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.894 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 18:57:53 compute-0 nova_compute[240119]: 2026-01-23 18:57:53.895 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:57:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:54 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1936026711' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:57:54 compute-0 nova_compute[240119]: 2026-01-23 18:57:54.891 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:57:54 compute-0 nova_compute[240119]: 2026-01-23 18:57:54.891 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:57:54 compute-0 nova_compute[240119]: 2026-01-23 18:57:54.892 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:57:54 compute-0 nova_compute[240119]: 2026-01-23 18:57:54.892 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:57:55 compute-0 nova_compute[240119]: 2026-01-23 18:57:55.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:57:55 compute-0 nova_compute[240119]: 2026-01-23 18:57:55.348 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 18:57:55 compute-0 nova_compute[240119]: 2026-01-23 18:57:55.348 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 18:57:55 compute-0 nova_compute[240119]: 2026-01-23 18:57:55.379 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 18:57:55 compute-0 nova_compute[240119]: 2026-01-23 18:57:55.379 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:57:55 compute-0 nova_compute[240119]: 2026-01-23 18:57:55.380 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:57:55 compute-0 nova_compute[240119]: 2026-01-23 18:57:55.380 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:57:55 compute-0 nova_compute[240119]: 2026-01-23 18:57:55.380 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 18:57:55 compute-0 ceph-mon[75097]: pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:57:56 compute-0 nova_compute[240119]: 2026-01-23 18:57:56.376 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:57:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:56 compute-0 ceph-mon[75097]: pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 18:57:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1670650512' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:57:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 18:57:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1670650512' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:57:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1670650512' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:57:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1670650512' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:57:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:57:58 compute-0 ceph-mon[75097]: pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:58:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:58:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:58:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:58:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:58:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:58:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:00 compute-0 podman[242943]: 2026-01-23 18:58:00.970410452 +0000 UTC m=+0.054484068 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 18:58:01 compute-0 ceph-mon[75097]: pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:03 compute-0 ceph-mon[75097]: pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:04 compute-0 sudo[242962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:58:04 compute-0 sudo[242962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:04 compute-0 sudo[242962]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:04 compute-0 sudo[242987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 18:58:04 compute-0 sudo[242987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:04 compute-0 ceph-mon[75097]: pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:05 compute-0 podman[243057]: 2026-01-23 18:58:05.022105476 +0000 UTC m=+0.059997403 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:58:05 compute-0 podman[243057]: 2026-01-23 18:58:05.129114421 +0000 UTC m=+0.167006378 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:58:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:05 compute-0 sudo[242987]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:58:05 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:58:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:58:05 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:58:06 compute-0 sudo[243244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:58:06 compute-0 sudo[243244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:06 compute-0 sudo[243244]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:06 compute-0 sudo[243269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:58:06 compute-0 sudo[243269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:06 compute-0 sudo[243269]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:58:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:58:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:58:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:58:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:58:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:58:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:58:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:58:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:58:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:58:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:58:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:58:06 compute-0 sudo[243326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:58:06 compute-0 sudo[243326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:06 compute-0 sudo[243326]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:06 compute-0 sudo[243351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:58:06 compute-0 sudo[243351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:58:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:58:06 compute-0 ceph-mon[75097]: pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:58:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:58:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:58:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:58:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:58:06 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:58:07 compute-0 podman[243387]: 2026-01-23 18:58:07.251890865 +0000 UTC m=+0.100765942 container create 7cd8740dbd87b49d96715166d1103fc14eb24a09eb31fcbc9ed4aae0f6c9b245 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hawking, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:58:07 compute-0 podman[243387]: 2026-01-23 18:58:07.179287345 +0000 UTC m=+0.028162482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:58:07 compute-0 systemd[1]: Started libpod-conmon-7cd8740dbd87b49d96715166d1103fc14eb24a09eb31fcbc9ed4aae0f6c9b245.scope.
Jan 23 18:58:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:58:07 compute-0 podman[243387]: 2026-01-23 18:58:07.33273709 +0000 UTC m=+0.181612197 container init 7cd8740dbd87b49d96715166d1103fc14eb24a09eb31fcbc9ed4aae0f6c9b245 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:58:07 compute-0 podman[243387]: 2026-01-23 18:58:07.340190812 +0000 UTC m=+0.189065899 container start 7cd8740dbd87b49d96715166d1103fc14eb24a09eb31fcbc9ed4aae0f6c9b245 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hawking, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 23 18:58:07 compute-0 podman[243387]: 2026-01-23 18:58:07.344004696 +0000 UTC m=+0.192879793 container attach 7cd8740dbd87b49d96715166d1103fc14eb24a09eb31fcbc9ed4aae0f6c9b245 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hawking, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:58:07 compute-0 fervent_hawking[243404]: 167 167
Jan 23 18:58:07 compute-0 systemd[1]: libpod-7cd8740dbd87b49d96715166d1103fc14eb24a09eb31fcbc9ed4aae0f6c9b245.scope: Deactivated successfully.
Jan 23 18:58:07 compute-0 podman[243387]: 2026-01-23 18:58:07.345994915 +0000 UTC m=+0.194869992 container died 7cd8740dbd87b49d96715166d1103fc14eb24a09eb31fcbc9ed4aae0f6c9b245 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 23 18:58:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-03637bd1ad71164674de93b8387460a2a7af8fb29681fb564a09bb78243fc606-merged.mount: Deactivated successfully.
Jan 23 18:58:07 compute-0 podman[243387]: 2026-01-23 18:58:07.387897973 +0000 UTC m=+0.236773050 container remove 7cd8740dbd87b49d96715166d1103fc14eb24a09eb31fcbc9ed4aae0f6c9b245 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 18:58:07 compute-0 systemd[1]: libpod-conmon-7cd8740dbd87b49d96715166d1103fc14eb24a09eb31fcbc9ed4aae0f6c9b245.scope: Deactivated successfully.
Jan 23 18:58:07 compute-0 podman[243426]: 2026-01-23 18:58:07.571821005 +0000 UTC m=+0.047677410 container create 0fe434c46c455b9c8cf98aadca72f30afe27958f12a02d4def6fc525e0c6fd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:58:07 compute-0 systemd[1]: Started libpod-conmon-0fe434c46c455b9c8cf98aadca72f30afe27958f12a02d4def6fc525e0c6fd18.scope.
Jan 23 18:58:07 compute-0 podman[243426]: 2026-01-23 18:58:07.550255226 +0000 UTC m=+0.026111641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:58:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e1f09236467c4561cbd84c178a52edef5c9163783c18172d57d8408b3bc33e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e1f09236467c4561cbd84c178a52edef5c9163783c18172d57d8408b3bc33e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e1f09236467c4561cbd84c178a52edef5c9163783c18172d57d8408b3bc33e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e1f09236467c4561cbd84c178a52edef5c9163783c18172d57d8408b3bc33e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e1f09236467c4561cbd84c178a52edef5c9163783c18172d57d8408b3bc33e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:07 compute-0 podman[243426]: 2026-01-23 18:58:07.745373723 +0000 UTC m=+0.221230158 container init 0fe434c46c455b9c8cf98aadca72f30afe27958f12a02d4def6fc525e0c6fd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 23 18:58:07 compute-0 podman[243426]: 2026-01-23 18:58:07.753745418 +0000 UTC m=+0.229601833 container start 0fe434c46c455b9c8cf98aadca72f30afe27958f12a02d4def6fc525e0c6fd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 23 18:58:07 compute-0 podman[243426]: 2026-01-23 18:58:07.758809032 +0000 UTC m=+0.234665427 container attach 0fe434c46c455b9c8cf98aadca72f30afe27958f12a02d4def6fc525e0c6fd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 23 18:58:08 compute-0 funny_lalande[243442]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:58:08 compute-0 funny_lalande[243442]: --> All data devices are unavailable
Jan 23 18:58:08 compute-0 systemd[1]: libpod-0fe434c46c455b9c8cf98aadca72f30afe27958f12a02d4def6fc525e0c6fd18.scope: Deactivated successfully.
Jan 23 18:58:08 compute-0 podman[243426]: 2026-01-23 18:58:08.284635342 +0000 UTC m=+0.760491737 container died 0fe434c46c455b9c8cf98aadca72f30afe27958f12a02d4def6fc525e0c6fd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 18:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-40e1f09236467c4561cbd84c178a52edef5c9163783c18172d57d8408b3bc33e-merged.mount: Deactivated successfully.
Jan 23 18:58:08 compute-0 podman[243426]: 2026-01-23 18:58:08.340983274 +0000 UTC m=+0.816839669 container remove 0fe434c46c455b9c8cf98aadca72f30afe27958f12a02d4def6fc525e0c6fd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 23 18:58:08 compute-0 systemd[1]: libpod-conmon-0fe434c46c455b9c8cf98aadca72f30afe27958f12a02d4def6fc525e0c6fd18.scope: Deactivated successfully.
Jan 23 18:58:08 compute-0 sudo[243351]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:08 compute-0 sudo[243473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:58:08 compute-0 sudo[243473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:08 compute-0 sudo[243473]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:08 compute-0 sudo[243498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:58:08 compute-0 sudo[243498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:08 compute-0 podman[243536]: 2026-01-23 18:58:08.747616289 +0000 UTC m=+0.024090562 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:58:08 compute-0 podman[243536]: 2026-01-23 18:58:08.864002444 +0000 UTC m=+0.140476697 container create 7d5d806ab3cac0602393cafa5136db5226c012b9591f08e07eab40dfa88831f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:58:08 compute-0 systemd[1]: Started libpod-conmon-7d5d806ab3cac0602393cafa5136db5226c012b9591f08e07eab40dfa88831f6.scope.
Jan 23 18:58:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:58:08 compute-0 podman[243536]: 2026-01-23 18:58:08.931834809 +0000 UTC m=+0.208309082 container init 7d5d806ab3cac0602393cafa5136db5226c012b9591f08e07eab40dfa88831f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 18:58:08 compute-0 podman[243536]: 2026-01-23 18:58:08.940220174 +0000 UTC m=+0.216694427 container start 7d5d806ab3cac0602393cafa5136db5226c012b9591f08e07eab40dfa88831f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:58:08 compute-0 podman[243536]: 2026-01-23 18:58:08.944420567 +0000 UTC m=+0.220894820 container attach 7d5d806ab3cac0602393cafa5136db5226c012b9591f08e07eab40dfa88831f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:58:08 compute-0 practical_buck[243552]: 167 167
Jan 23 18:58:08 compute-0 podman[243536]: 2026-01-23 18:58:08.946728134 +0000 UTC m=+0.223202397 container died 7d5d806ab3cac0602393cafa5136db5226c012b9591f08e07eab40dfa88831f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:58:08 compute-0 systemd[1]: libpod-7d5d806ab3cac0602393cafa5136db5226c012b9591f08e07eab40dfa88831f6.scope: Deactivated successfully.
Jan 23 18:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0bf4e505dd09733f197480ebfeeaa06dbe97b133aca7a0bdcce09c4c0c8e670-merged.mount: Deactivated successfully.
Jan 23 18:58:08 compute-0 podman[243536]: 2026-01-23 18:58:08.982534652 +0000 UTC m=+0.259008905 container remove 7d5d806ab3cac0602393cafa5136db5226c012b9591f08e07eab40dfa88831f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:58:08 compute-0 systemd[1]: libpod-conmon-7d5d806ab3cac0602393cafa5136db5226c012b9591f08e07eab40dfa88831f6.scope: Deactivated successfully.
Jan 23 18:58:09 compute-0 podman[243574]: 2026-01-23 18:58:09.153134917 +0000 UTC m=+0.048112531 container create a3dc7d1cf400307cd76330081c378cebb7565b53b904ef76eef87175531d8b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:58:09 compute-0 podman[243574]: 2026-01-23 18:58:09.127458338 +0000 UTC m=+0.022436042 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:58:09 compute-0 systemd[1]: Started libpod-conmon-a3dc7d1cf400307cd76330081c378cebb7565b53b904ef76eef87175531d8b87.scope.
Jan 23 18:58:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c39f6ca7c2dd510ecd377e128da25134a2125945080852e504ccc1d681b698/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c39f6ca7c2dd510ecd377e128da25134a2125945080852e504ccc1d681b698/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c39f6ca7c2dd510ecd377e128da25134a2125945080852e504ccc1d681b698/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c39f6ca7c2dd510ecd377e128da25134a2125945080852e504ccc1d681b698/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:09 compute-0 podman[243574]: 2026-01-23 18:58:09.26572902 +0000 UTC m=+0.160706654 container init a3dc7d1cf400307cd76330081c378cebb7565b53b904ef76eef87175531d8b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cray, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:58:09 compute-0 podman[243574]: 2026-01-23 18:58:09.275894799 +0000 UTC m=+0.170872413 container start a3dc7d1cf400307cd76330081c378cebb7565b53b904ef76eef87175531d8b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 18:58:09 compute-0 podman[243574]: 2026-01-23 18:58:09.279501527 +0000 UTC m=+0.174479151 container attach a3dc7d1cf400307cd76330081c378cebb7565b53b904ef76eef87175531d8b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cray, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:58:09 compute-0 ceph-mon[75097]: pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:09 compute-0 exciting_cray[243590]: {
Jan 23 18:58:09 compute-0 exciting_cray[243590]:     "0": [
Jan 23 18:58:09 compute-0 exciting_cray[243590]:         {
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "devices": [
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "/dev/loop3"
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             ],
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_name": "ceph_lv0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_size": "21470642176",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "name": "ceph_lv0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "tags": {
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.cluster_name": "ceph",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.crush_device_class": "",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.encrypted": "0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.objectstore": "bluestore",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.osd_id": "0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.type": "block",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.vdo": "0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.with_tpm": "0"
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             },
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "type": "block",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "vg_name": "ceph_vg0"
Jan 23 18:58:09 compute-0 exciting_cray[243590]:         }
Jan 23 18:58:09 compute-0 exciting_cray[243590]:     ],
Jan 23 18:58:09 compute-0 exciting_cray[243590]:     "1": [
Jan 23 18:58:09 compute-0 exciting_cray[243590]:         {
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "devices": [
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "/dev/loop4"
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             ],
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_name": "ceph_lv1",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_size": "21470642176",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "name": "ceph_lv1",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "tags": {
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.cluster_name": "ceph",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.crush_device_class": "",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.encrypted": "0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.objectstore": "bluestore",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.osd_id": "1",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.type": "block",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.vdo": "0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.with_tpm": "0"
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             },
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "type": "block",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "vg_name": "ceph_vg1"
Jan 23 18:58:09 compute-0 exciting_cray[243590]:         }
Jan 23 18:58:09 compute-0 exciting_cray[243590]:     ],
Jan 23 18:58:09 compute-0 exciting_cray[243590]:     "2": [
Jan 23 18:58:09 compute-0 exciting_cray[243590]:         {
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "devices": [
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "/dev/loop5"
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             ],
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_name": "ceph_lv2",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_size": "21470642176",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "name": "ceph_lv2",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "tags": {
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.cluster_name": "ceph",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.crush_device_class": "",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.encrypted": "0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.objectstore": "bluestore",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.osd_id": "2",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.type": "block",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.vdo": "0",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:                 "ceph.with_tpm": "0"
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             },
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "type": "block",
Jan 23 18:58:09 compute-0 exciting_cray[243590]:             "vg_name": "ceph_vg2"
Jan 23 18:58:09 compute-0 exciting_cray[243590]:         }
Jan 23 18:58:09 compute-0 exciting_cray[243590]:     ]
Jan 23 18:58:09 compute-0 exciting_cray[243590]: }
Jan 23 18:58:09 compute-0 systemd[1]: libpod-a3dc7d1cf400307cd76330081c378cebb7565b53b904ef76eef87175531d8b87.scope: Deactivated successfully.
Jan 23 18:58:09 compute-0 podman[243574]: 2026-01-23 18:58:09.60002199 +0000 UTC m=+0.494999614 container died a3dc7d1cf400307cd76330081c378cebb7565b53b904ef76eef87175531d8b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 18:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-04c39f6ca7c2dd510ecd377e128da25134a2125945080852e504ccc1d681b698-merged.mount: Deactivated successfully.
Jan 23 18:58:09 compute-0 podman[243574]: 2026-01-23 18:58:09.646241994 +0000 UTC m=+0.541219608 container remove a3dc7d1cf400307cd76330081c378cebb7565b53b904ef76eef87175531d8b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:58:09 compute-0 systemd[1]: libpod-conmon-a3dc7d1cf400307cd76330081c378cebb7565b53b904ef76eef87175531d8b87.scope: Deactivated successfully.
Jan 23 18:58:09 compute-0 sudo[243498]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:09 compute-0 sudo[243609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:58:09 compute-0 sudo[243609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:09 compute-0 sudo[243609]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:09 compute-0 sudo[243634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:58:09 compute-0 sudo[243634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:10 compute-0 podman[243673]: 2026-01-23 18:58:10.09610275 +0000 UTC m=+0.041131031 container create 4a42df45d08a6e791e48fb484e56a165887990d83ba479350c2f7cca3db8d23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_gould, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:58:10 compute-0 systemd[1]: Started libpod-conmon-4a42df45d08a6e791e48fb484e56a165887990d83ba479350c2f7cca3db8d23d.scope.
Jan 23 18:58:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:58:10 compute-0 podman[243673]: 2026-01-23 18:58:10.170715921 +0000 UTC m=+0.115744232 container init 4a42df45d08a6e791e48fb484e56a165887990d83ba479350c2f7cca3db8d23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_gould, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:58:10 compute-0 podman[243673]: 2026-01-23 18:58:10.077438841 +0000 UTC m=+0.022467142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:58:10 compute-0 podman[243673]: 2026-01-23 18:58:10.17883521 +0000 UTC m=+0.123863491 container start 4a42df45d08a6e791e48fb484e56a165887990d83ba479350c2f7cca3db8d23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 23 18:58:10 compute-0 podman[243673]: 2026-01-23 18:58:10.182844999 +0000 UTC m=+0.127873300 container attach 4a42df45d08a6e791e48fb484e56a165887990d83ba479350c2f7cca3db8d23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_gould, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:58:10 compute-0 strange_gould[243689]: 167 167
Jan 23 18:58:10 compute-0 systemd[1]: libpod-4a42df45d08a6e791e48fb484e56a165887990d83ba479350c2f7cca3db8d23d.scope: Deactivated successfully.
Jan 23 18:58:10 compute-0 podman[243673]: 2026-01-23 18:58:10.184381956 +0000 UTC m=+0.129410247 container died 4a42df45d08a6e791e48fb484e56a165887990d83ba479350c2f7cca3db8d23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 18:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7b2a3b5277568201bf303f9da86f2c63d3dac202f3a500f2925bcc57d75e193-merged.mount: Deactivated successfully.
Jan 23 18:58:10 compute-0 podman[243673]: 2026-01-23 18:58:10.223625119 +0000 UTC m=+0.168653400 container remove 4a42df45d08a6e791e48fb484e56a165887990d83ba479350c2f7cca3db8d23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 18:58:10 compute-0 systemd[1]: libpod-conmon-4a42df45d08a6e791e48fb484e56a165887990d83ba479350c2f7cca3db8d23d.scope: Deactivated successfully.
Jan 23 18:58:10 compute-0 podman[243711]: 2026-01-23 18:58:10.39325997 +0000 UTC m=+0.042983596 container create c8960bd12052a1bee9649f5f22bd3f8dd227ffb8d7459c0d2669b36e83dec36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_raman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 23 18:58:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:10 compute-0 systemd[1]: Started libpod-conmon-c8960bd12052a1bee9649f5f22bd3f8dd227ffb8d7459c0d2669b36e83dec36b.scope.
Jan 23 18:58:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cbecf94bde79bda4908c23abb433a3c97ff69459a833711a76461f13d87716/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cbecf94bde79bda4908c23abb433a3c97ff69459a833711a76461f13d87716/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cbecf94bde79bda4908c23abb433a3c97ff69459a833711a76461f13d87716/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cbecf94bde79bda4908c23abb433a3c97ff69459a833711a76461f13d87716/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:58:10 compute-0 podman[243711]: 2026-01-23 18:58:10.465175724 +0000 UTC m=+0.114899370 container init c8960bd12052a1bee9649f5f22bd3f8dd227ffb8d7459c0d2669b36e83dec36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_raman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 18:58:10 compute-0 podman[243711]: 2026-01-23 18:58:10.374664834 +0000 UTC m=+0.024388490 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:58:10 compute-0 podman[243711]: 2026-01-23 18:58:10.473755664 +0000 UTC m=+0.123479290 container start c8960bd12052a1bee9649f5f22bd3f8dd227ffb8d7459c0d2669b36e83dec36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_raman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:58:10 compute-0 podman[243711]: 2026-01-23 18:58:10.478291416 +0000 UTC m=+0.128015042 container attach c8960bd12052a1bee9649f5f22bd3f8dd227ffb8d7459c0d2669b36e83dec36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_raman, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 18:58:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:11 compute-0 lvm[243806]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:58:11 compute-0 lvm[243804]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:58:11 compute-0 lvm[243806]: VG ceph_vg1 finished
Jan 23 18:58:11 compute-0 lvm[243804]: VG ceph_vg0 finished
Jan 23 18:58:11 compute-0 lvm[243808]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:58:11 compute-0 lvm[243808]: VG ceph_vg2 finished
Jan 23 18:58:11 compute-0 pedantic_raman[243727]: {}
Jan 23 18:58:11 compute-0 systemd[1]: libpod-c8960bd12052a1bee9649f5f22bd3f8dd227ffb8d7459c0d2669b36e83dec36b.scope: Deactivated successfully.
Jan 23 18:58:11 compute-0 systemd[1]: libpod-c8960bd12052a1bee9649f5f22bd3f8dd227ffb8d7459c0d2669b36e83dec36b.scope: Consumed 1.337s CPU time.
Jan 23 18:58:11 compute-0 podman[243711]: 2026-01-23 18:58:11.318695253 +0000 UTC m=+0.968418929 container died c8960bd12052a1bee9649f5f22bd3f8dd227ffb8d7459c0d2669b36e83dec36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_raman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 18:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-42cbecf94bde79bda4908c23abb433a3c97ff69459a833711a76461f13d87716-merged.mount: Deactivated successfully.
Jan 23 18:58:11 compute-0 podman[243711]: 2026-01-23 18:58:11.371769005 +0000 UTC m=+1.021492651 container remove c8960bd12052a1bee9649f5f22bd3f8dd227ffb8d7459c0d2669b36e83dec36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_raman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:58:11 compute-0 systemd[1]: libpod-conmon-c8960bd12052a1bee9649f5f22bd3f8dd227ffb8d7459c0d2669b36e83dec36b.scope: Deactivated successfully.
Jan 23 18:58:11 compute-0 sudo[243634]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:58:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:58:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:58:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:58:11 compute-0 ceph-mon[75097]: pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:58:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:58:11 compute-0 sudo[243822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:58:11 compute-0 sudo[243822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:58:11 compute-0 sudo[243822]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:13 compute-0 ceph-mon[75097]: pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:58:13.579 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:58:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:58:13.581 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:58:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:58:13.581 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:58:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:14 compute-0 ceph-mon[75097]: pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:17 compute-0 ceph-mon[75097]: pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:19 compute-0 ceph-mon[75097]: pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:20 compute-0 ceph-mon[75097]: pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:22 compute-0 ceph-mon[75097]: pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:23 compute-0 podman[243847]: 2026-01-23 18:58:23.022401096 +0000 UTC m=+0.094379377 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 18:58:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:24 compute-0 ceph-mon[75097]: pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:27 compute-0 ceph-mon[75097]: pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:28 compute-0 ceph-mon[75097]: pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:58:28
Jan 23 18:58:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:58:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:58:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.data', 'images']
Jan 23 18:58:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:58:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:58:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:58:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:58:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:58:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:58:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:58:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:30 compute-0 ceph-mon[75097]: pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:58:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:58:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:58:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:58:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:58:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:58:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:58:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:58:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:58:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:58:31 compute-0 podman[243874]: 2026-01-23 18:58:31.959495578 +0000 UTC m=+0.044619036 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 18:58:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:33 compute-0 ceph-mon[75097]: pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:34 compute-0 ceph-mon[75097]: pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:37 compute-0 ceph-mon[75097]: pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:39 compute-0 ceph-mon[75097]: pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5157608029245746e-06 of space, bias 4.0, pg target 0.0030189129635094895 quantized to 16 (current 16)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:58:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:40 compute-0 ceph-mon[75097]: pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:43 compute-0 ceph-mon[75097]: pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:44 compute-0 ceph-mon[75097]: pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:46 compute-0 ceph-mon[75097]: pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:48 compute-0 ceph-mon[75097]: pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:51 compute-0 nova_compute[240119]: 2026-01-23 18:58:51.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:51 compute-0 nova_compute[240119]: 2026-01-23 18:58:51.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 18:58:51 compute-0 nova_compute[240119]: 2026-01-23 18:58:51.368 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 18:58:51 compute-0 nova_compute[240119]: 2026-01-23 18:58:51.369 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:51 compute-0 nova_compute[240119]: 2026-01-23 18:58:51.369 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 18:58:51 compute-0 nova_compute[240119]: 2026-01-23 18:58:51.383 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:51 compute-0 ceph-mon[75097]: pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:52 compute-0 nova_compute[240119]: 2026-01-23 18:58:52.398 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:52 compute-0 nova_compute[240119]: 2026-01-23 18:58:52.429 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:58:52 compute-0 nova_compute[240119]: 2026-01-23 18:58:52.430 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:58:52 compute-0 nova_compute[240119]: 2026-01-23 18:58:52.430 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:58:52 compute-0 nova_compute[240119]: 2026-01-23 18:58:52.430 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 18:58:52 compute-0 nova_compute[240119]: 2026-01-23 18:58:52.431 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:58:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:52 compute-0 ceph-mon[75097]: pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:58:52 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510384743' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:58:52 compute-0 nova_compute[240119]: 2026-01-23 18:58:52.990 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.159 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.161 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.161 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.161 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.223 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.223 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.239 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:58:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:58:53 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42458625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.799 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.804 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.822 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.824 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 18:58:53 compute-0 nova_compute[240119]: 2026-01-23 18:58:53.824 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:58:53 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/510384743' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:58:53 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/42458625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:58:54 compute-0 podman[243939]: 2026-01-23 18:58:54.029051565 +0000 UTC m=+0.115196657 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 18:58:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:55 compute-0 ceph-mon[75097]: pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:58:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:56 compute-0 nova_compute[240119]: 2026-01-23 18:58:56.769 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:56 compute-0 nova_compute[240119]: 2026-01-23 18:58:56.770 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:56 compute-0 nova_compute[240119]: 2026-01-23 18:58:56.770 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:56 compute-0 nova_compute[240119]: 2026-01-23 18:58:56.770 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:56 compute-0 nova_compute[240119]: 2026-01-23 18:58:56.770 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:56 compute-0 nova_compute[240119]: 2026-01-23 18:58:56.770 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 18:58:57 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1594178211' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:58:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 18:58:57 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1594178211' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:58:57 compute-0 nova_compute[240119]: 2026-01-23 18:58:57.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:57 compute-0 nova_compute[240119]: 2026-01-23 18:58:57.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 18:58:57 compute-0 nova_compute[240119]: 2026-01-23 18:58:57.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 18:58:57 compute-0 nova_compute[240119]: 2026-01-23 18:58:57.363 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 18:58:57 compute-0 nova_compute[240119]: 2026-01-23 18:58:57.364 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:58:57 compute-0 nova_compute[240119]: 2026-01-23 18:58:57.364 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 18:58:57 compute-0 ceph-mon[75097]: pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:58:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1594178211' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:58:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1594178211' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:58:58 compute-0 ceph-mon[75097]: pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:59:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:59:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:59:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:59:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:59:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:59:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:01 compute-0 ceph-mon[75097]: pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:02 compute-0 ceph-mon[75097]: pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:03 compute-0 podman[243965]: 2026-01-23 18:59:03.036832051 +0000 UTC m=+0.091484476 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 23 18:59:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:04 compute-0 ceph-mon[75097]: pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:07 compute-0 ceph-mon[75097]: pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:09 compute-0 ceph-mon[75097]: pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:10 compute-0 ceph-mon[75097]: pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:11 compute-0 sudo[243984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:59:11 compute-0 sudo[243984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:59:11 compute-0 sudo[243984]: pam_unix(sudo:session): session closed for user root
Jan 23 18:59:11 compute-0 sudo[244009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 18:59:11 compute-0 sudo[244009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:59:12 compute-0 sudo[244009]: pam_unix(sudo:session): session closed for user root
Jan 23 18:59:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:59:12 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:59:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 18:59:12 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:59:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 18:59:12 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:59:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 18:59:12 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:59:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 18:59:12 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:59:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 18:59:12 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:59:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:59:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 18:59:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:59:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 18:59:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 18:59:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 18:59:12 compute-0 sudo[244066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:59:12 compute-0 sudo[244066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:59:12 compute-0 sudo[244066]: pam_unix(sudo:session): session closed for user root
Jan 23 18:59:12 compute-0 sudo[244091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 18:59:12 compute-0 sudo[244091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:59:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:12 compute-0 podman[244128]: 2026-01-23 18:59:12.57065019 +0000 UTC m=+0.040886830 container create 59afe3c3485b9c61a874ce5deee2bc103456a69c9bf5e074cbd40c84b44aadcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_einstein, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:59:12 compute-0 systemd[1]: Started libpod-conmon-59afe3c3485b9c61a874ce5deee2bc103456a69c9bf5e074cbd40c84b44aadcb.scope.
Jan 23 18:59:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:59:12 compute-0 podman[244128]: 2026-01-23 18:59:12.646551503 +0000 UTC m=+0.116788163 container init 59afe3c3485b9c61a874ce5deee2bc103456a69c9bf5e074cbd40c84b44aadcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_einstein, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 23 18:59:12 compute-0 podman[244128]: 2026-01-23 18:59:12.551763338 +0000 UTC m=+0.021999998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:59:12 compute-0 podman[244128]: 2026-01-23 18:59:12.654659391 +0000 UTC m=+0.124896041 container start 59afe3c3485b9c61a874ce5deee2bc103456a69c9bf5e074cbd40c84b44aadcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:59:12 compute-0 naughty_einstein[244144]: 167 167
Jan 23 18:59:12 compute-0 podman[244128]: 2026-01-23 18:59:12.660229117 +0000 UTC m=+0.130465787 container attach 59afe3c3485b9c61a874ce5deee2bc103456a69c9bf5e074cbd40c84b44aadcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 18:59:12 compute-0 systemd[1]: libpod-59afe3c3485b9c61a874ce5deee2bc103456a69c9bf5e074cbd40c84b44aadcb.scope: Deactivated successfully.
Jan 23 18:59:12 compute-0 podman[244128]: 2026-01-23 18:59:12.661305424 +0000 UTC m=+0.131542064 container died 59afe3c3485b9c61a874ce5deee2bc103456a69c9bf5e074cbd40c84b44aadcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 18:59:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-572732ceb01c6da42e868f114de2c39791afc7e9045d8e35b5966c948350257d-merged.mount: Deactivated successfully.
Jan 23 18:59:12 compute-0 podman[244128]: 2026-01-23 18:59:12.708881714 +0000 UTC m=+0.179118354 container remove 59afe3c3485b9c61a874ce5deee2bc103456a69c9bf5e074cbd40c84b44aadcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 18:59:12 compute-0 systemd[1]: libpod-conmon-59afe3c3485b9c61a874ce5deee2bc103456a69c9bf5e074cbd40c84b44aadcb.scope: Deactivated successfully.
Jan 23 18:59:12 compute-0 podman[244169]: 2026-01-23 18:59:12.875298268 +0000 UTC m=+0.049260924 container create 546f323a17596180741194255411c46e7e64b0b41268b77c43f6f25c624f35f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 18:59:12 compute-0 systemd[1]: Started libpod-conmon-546f323a17596180741194255411c46e7e64b0b41268b77c43f6f25c624f35f3.scope.
Jan 23 18:59:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b774f7be97acb0dd6bf0acea7a886eb5be8c29867a1e3523b8a41fd164506957/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b774f7be97acb0dd6bf0acea7a886eb5be8c29867a1e3523b8a41fd164506957/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b774f7be97acb0dd6bf0acea7a886eb5be8c29867a1e3523b8a41fd164506957/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b774f7be97acb0dd6bf0acea7a886eb5be8c29867a1e3523b8a41fd164506957/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b774f7be97acb0dd6bf0acea7a886eb5be8c29867a1e3523b8a41fd164506957/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:12 compute-0 podman[244169]: 2026-01-23 18:59:12.945531324 +0000 UTC m=+0.119494000 container init 546f323a17596180741194255411c46e7e64b0b41268b77c43f6f25c624f35f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 18:59:12 compute-0 podman[244169]: 2026-01-23 18:59:12.854628744 +0000 UTC m=+0.028591430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:59:12 compute-0 podman[244169]: 2026-01-23 18:59:12.952933474 +0000 UTC m=+0.126896130 container start 546f323a17596180741194255411c46e7e64b0b41268b77c43f6f25c624f35f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:59:12 compute-0 podman[244169]: 2026-01-23 18:59:12.956650614 +0000 UTC m=+0.130613290 container attach 546f323a17596180741194255411c46e7e64b0b41268b77c43f6f25c624f35f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 23 18:59:13 compute-0 ceph-mon[75097]: pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:13 compute-0 upbeat_feistel[244186]: --> passed data devices: 0 physical, 3 LVM
Jan 23 18:59:13 compute-0 upbeat_feistel[244186]: --> All data devices are unavailable
Jan 23 18:59:13 compute-0 systemd[1]: libpod-546f323a17596180741194255411c46e7e64b0b41268b77c43f6f25c624f35f3.scope: Deactivated successfully.
Jan 23 18:59:13 compute-0 podman[244169]: 2026-01-23 18:59:13.447434109 +0000 UTC m=+0.621396755 container died 546f323a17596180741194255411c46e7e64b0b41268b77c43f6f25c624f35f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 18:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b774f7be97acb0dd6bf0acea7a886eb5be8c29867a1e3523b8a41fd164506957-merged.mount: Deactivated successfully.
Jan 23 18:59:13 compute-0 podman[244169]: 2026-01-23 18:59:13.493625947 +0000 UTC m=+0.667588613 container remove 546f323a17596180741194255411c46e7e64b0b41268b77c43f6f25c624f35f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:59:13 compute-0 systemd[1]: libpod-conmon-546f323a17596180741194255411c46e7e64b0b41268b77c43f6f25c624f35f3.scope: Deactivated successfully.
Jan 23 18:59:13 compute-0 sudo[244091]: pam_unix(sudo:session): session closed for user root
Jan 23 18:59:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:59:13.581 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:59:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:59:13.583 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:59:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:59:13.584 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:59:13 compute-0 sudo[244217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:59:13 compute-0 sudo[244217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:59:13 compute-0 sudo[244217]: pam_unix(sudo:session): session closed for user root
Jan 23 18:59:13 compute-0 sudo[244242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 18:59:13 compute-0 sudo[244242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:59:13 compute-0 podman[244278]: 2026-01-23 18:59:13.922036498 +0000 UTC m=+0.041709429 container create d1bd22adc41175a9dde3c48f9b224120af6c50965aa23eff309476dc1c803a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:59:13 compute-0 systemd[1]: Started libpod-conmon-d1bd22adc41175a9dde3c48f9b224120af6c50965aa23eff309476dc1c803a51.scope.
Jan 23 18:59:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:59:13 compute-0 podman[244278]: 2026-01-23 18:59:13.90451052 +0000 UTC m=+0.024183481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:59:14 compute-0 podman[244278]: 2026-01-23 18:59:14.012782304 +0000 UTC m=+0.132455255 container init d1bd22adc41175a9dde3c48f9b224120af6c50965aa23eff309476dc1c803a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_rubin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:59:14 compute-0 podman[244278]: 2026-01-23 18:59:14.018353061 +0000 UTC m=+0.138025992 container start d1bd22adc41175a9dde3c48f9b224120af6c50965aa23eff309476dc1c803a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 18:59:14 compute-0 podman[244278]: 2026-01-23 18:59:14.02163325 +0000 UTC m=+0.141306181 container attach d1bd22adc41175a9dde3c48f9b224120af6c50965aa23eff309476dc1c803a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_rubin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:59:14 compute-0 gallant_rubin[244294]: 167 167
Jan 23 18:59:14 compute-0 systemd[1]: libpod-d1bd22adc41175a9dde3c48f9b224120af6c50965aa23eff309476dc1c803a51.scope: Deactivated successfully.
Jan 23 18:59:14 compute-0 podman[244278]: 2026-01-23 18:59:14.022660046 +0000 UTC m=+0.142332977 container died d1bd22adc41175a9dde3c48f9b224120af6c50965aa23eff309476dc1c803a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4917106650ee193f33d308b76a15df465a345eddd9e171142af04eaf339249c3-merged.mount: Deactivated successfully.
Jan 23 18:59:14 compute-0 podman[244278]: 2026-01-23 18:59:14.059611768 +0000 UTC m=+0.179284699 container remove d1bd22adc41175a9dde3c48f9b224120af6c50965aa23eff309476dc1c803a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_rubin, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 18:59:14 compute-0 systemd[1]: libpod-conmon-d1bd22adc41175a9dde3c48f9b224120af6c50965aa23eff309476dc1c803a51.scope: Deactivated successfully.
Jan 23 18:59:14 compute-0 podman[244315]: 2026-01-23 18:59:14.22390277 +0000 UTC m=+0.044933839 container create 33d294f958e5ba2c644f518d99c63c975c08dba69a6ca212266ddab0a0a9960c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 18:59:14 compute-0 systemd[1]: Started libpod-conmon-33d294f958e5ba2c644f518d99c63c975c08dba69a6ca212266ddab0a0a9960c.scope.
Jan 23 18:59:14 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5a2b048d498f29765e62cbecf4ee6447fecc5766d656e66e80579f0fbf96f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5a2b048d498f29765e62cbecf4ee6447fecc5766d656e66e80579f0fbf96f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5a2b048d498f29765e62cbecf4ee6447fecc5766d656e66e80579f0fbf96f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5a2b048d498f29765e62cbecf4ee6447fecc5766d656e66e80579f0fbf96f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:14 compute-0 podman[244315]: 2026-01-23 18:59:14.298828979 +0000 UTC m=+0.119860028 container init 33d294f958e5ba2c644f518d99c63c975c08dba69a6ca212266ddab0a0a9960c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 18:59:14 compute-0 podman[244315]: 2026-01-23 18:59:14.206446363 +0000 UTC m=+0.027477422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:59:14 compute-0 podman[244315]: 2026-01-23 18:59:14.306780044 +0000 UTC m=+0.127811073 container start 33d294f958e5ba2c644f518d99c63c975c08dba69a6ca212266ddab0a0a9960c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 18:59:14 compute-0 podman[244315]: 2026-01-23 18:59:14.311607871 +0000 UTC m=+0.132638930 container attach 33d294f958e5ba2c644f518d99c63c975c08dba69a6ca212266ddab0a0a9960c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haslett, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 23 18:59:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:14 compute-0 jolly_haslett[244332]: {
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:     "0": [
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:         {
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "devices": [
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "/dev/loop3"
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             ],
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_name": "ceph_lv0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_size": "21470642176",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "name": "ceph_lv0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "tags": {
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.cluster_name": "ceph",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.crush_device_class": "",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.encrypted": "0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.objectstore": "bluestore",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.osd_id": "0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.type": "block",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.vdo": "0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.with_tpm": "0"
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             },
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "type": "block",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "vg_name": "ceph_vg0"
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:         }
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:     ],
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:     "1": [
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:         {
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "devices": [
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "/dev/loop4"
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             ],
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_name": "ceph_lv1",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_size": "21470642176",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "name": "ceph_lv1",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "tags": {
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.cluster_name": "ceph",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.crush_device_class": "",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.encrypted": "0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.objectstore": "bluestore",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.osd_id": "1",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.type": "block",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.vdo": "0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.with_tpm": "0"
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             },
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "type": "block",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "vg_name": "ceph_vg1"
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:         }
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:     ],
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:     "2": [
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:         {
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "devices": [
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "/dev/loop5"
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             ],
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_name": "ceph_lv2",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_size": "21470642176",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "name": "ceph_lv2",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "tags": {
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.cluster_name": "ceph",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.crush_device_class": "",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.encrypted": "0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.objectstore": "bluestore",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.osd_id": "2",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.type": "block",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.vdo": "0",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:                 "ceph.with_tpm": "0"
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             },
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "type": "block",
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:             "vg_name": "ceph_vg2"
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:         }
Jan 23 18:59:14 compute-0 jolly_haslett[244332]:     ]
Jan 23 18:59:14 compute-0 jolly_haslett[244332]: }
Jan 23 18:59:14 compute-0 systemd[1]: libpod-33d294f958e5ba2c644f518d99c63c975c08dba69a6ca212266ddab0a0a9960c.scope: Deactivated successfully.
Jan 23 18:59:14 compute-0 podman[244315]: 2026-01-23 18:59:14.606548813 +0000 UTC m=+0.427579862 container died 33d294f958e5ba2c644f518d99c63c975c08dba69a6ca212266ddab0a0a9960c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haslett, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 18:59:15 compute-0 ceph-mon[75097]: pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c5a2b048d498f29765e62cbecf4ee6447fecc5766d656e66e80579f0fbf96f7-merged.mount: Deactivated successfully.
Jan 23 18:59:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:16 compute-0 podman[244315]: 2026-01-23 18:59:16.068704396 +0000 UTC m=+1.889735425 container remove 33d294f958e5ba2c644f518d99c63c975c08dba69a6ca212266ddab0a0a9960c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haslett, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 23 18:59:16 compute-0 sudo[244242]: pam_unix(sudo:session): session closed for user root
Jan 23 18:59:16 compute-0 systemd[1]: libpod-conmon-33d294f958e5ba2c644f518d99c63c975c08dba69a6ca212266ddab0a0a9960c.scope: Deactivated successfully.
Jan 23 18:59:16 compute-0 sudo[244352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 18:59:16 compute-0 sudo[244352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:59:16 compute-0 sudo[244352]: pam_unix(sudo:session): session closed for user root
Jan 23 18:59:16 compute-0 sudo[244377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 18:59:16 compute-0 sudo[244377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:59:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:16 compute-0 podman[244414]: 2026-01-23 18:59:16.524487236 +0000 UTC m=+0.056376658 container create 2522a07ca5d0aac7529b78234c80aaccf95c9c6434bf86842a2f1a5ccc82dfde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_chandrasekhar, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:59:16 compute-0 systemd[1]: Started libpod-conmon-2522a07ca5d0aac7529b78234c80aaccf95c9c6434bf86842a2f1a5ccc82dfde.scope.
Jan 23 18:59:16 compute-0 ceph-mon[75097]: pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:59:16 compute-0 podman[244414]: 2026-01-23 18:59:16.490727272 +0000 UTC m=+0.022616714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:59:16 compute-0 podman[244414]: 2026-01-23 18:59:16.842227845 +0000 UTC m=+0.374117267 container init 2522a07ca5d0aac7529b78234c80aaccf95c9c6434bf86842a2f1a5ccc82dfde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 18:59:16 compute-0 podman[244414]: 2026-01-23 18:59:16.850898036 +0000 UTC m=+0.382787458 container start 2522a07ca5d0aac7529b78234c80aaccf95c9c6434bf86842a2f1a5ccc82dfde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_chandrasekhar, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:59:16 compute-0 frosty_chandrasekhar[244430]: 167 167
Jan 23 18:59:16 compute-0 podman[244414]: 2026-01-23 18:59:16.856167986 +0000 UTC m=+0.388057438 container attach 2522a07ca5d0aac7529b78234c80aaccf95c9c6434bf86842a2f1a5ccc82dfde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 18:59:16 compute-0 systemd[1]: libpod-2522a07ca5d0aac7529b78234c80aaccf95c9c6434bf86842a2f1a5ccc82dfde.scope: Deactivated successfully.
Jan 23 18:59:16 compute-0 podman[244414]: 2026-01-23 18:59:16.857130778 +0000 UTC m=+0.389020240 container died 2522a07ca5d0aac7529b78234c80aaccf95c9c6434bf86842a2f1a5ccc82dfde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 18:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4814994e6a47a00b544b9472a731a214f9f9f1764c779266eff981eaf3e5cad-merged.mount: Deactivated successfully.
Jan 23 18:59:16 compute-0 podman[244414]: 2026-01-23 18:59:16.901784249 +0000 UTC m=+0.433673671 container remove 2522a07ca5d0aac7529b78234c80aaccf95c9c6434bf86842a2f1a5ccc82dfde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_chandrasekhar, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 18:59:16 compute-0 systemd[1]: libpod-conmon-2522a07ca5d0aac7529b78234c80aaccf95c9c6434bf86842a2f1a5ccc82dfde.scope: Deactivated successfully.
Jan 23 18:59:17 compute-0 podman[244454]: 2026-01-23 18:59:17.04760626 +0000 UTC m=+0.022824959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 18:59:17 compute-0 podman[244454]: 2026-01-23 18:59:17.389507998 +0000 UTC m=+0.364726677 container create c724e1b7e16e3309ae4d7ee6cb8d397074962489fbe0ec9770a4eb9069c02195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 18:59:17 compute-0 systemd[1]: Started libpod-conmon-c724e1b7e16e3309ae4d7ee6cb8d397074962489fbe0ec9770a4eb9069c02195.scope.
Jan 23 18:59:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 18:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a9fbe2db60013486ab3f32e24615d88034afa392df100dea523cf1cf21bdc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a9fbe2db60013486ab3f32e24615d88034afa392df100dea523cf1cf21bdc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a9fbe2db60013486ab3f32e24615d88034afa392df100dea523cf1cf21bdc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a9fbe2db60013486ab3f32e24615d88034afa392df100dea523cf1cf21bdc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 18:59:17 compute-0 podman[244454]: 2026-01-23 18:59:17.476955854 +0000 UTC m=+0.452174553 container init c724e1b7e16e3309ae4d7ee6cb8d397074962489fbe0ec9770a4eb9069c02195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 18:59:17 compute-0 podman[244454]: 2026-01-23 18:59:17.485235286 +0000 UTC m=+0.460453985 container start c724e1b7e16e3309ae4d7ee6cb8d397074962489fbe0ec9770a4eb9069c02195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 23 18:59:17 compute-0 podman[244454]: 2026-01-23 18:59:17.489283765 +0000 UTC m=+0.464502454 container attach c724e1b7e16e3309ae4d7ee6cb8d397074962489fbe0ec9770a4eb9069c02195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 23 18:59:18 compute-0 lvm[244548]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 18:59:18 compute-0 lvm[244549]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 18:59:18 compute-0 lvm[244548]: VG ceph_vg0 finished
Jan 23 18:59:18 compute-0 lvm[244549]: VG ceph_vg1 finished
Jan 23 18:59:18 compute-0 lvm[244551]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 18:59:18 compute-0 lvm[244551]: VG ceph_vg2 finished
Jan 23 18:59:18 compute-0 friendly_engelbart[244470]: {}
Jan 23 18:59:18 compute-0 systemd[1]: libpod-c724e1b7e16e3309ae4d7ee6cb8d397074962489fbe0ec9770a4eb9069c02195.scope: Deactivated successfully.
Jan 23 18:59:18 compute-0 systemd[1]: libpod-c724e1b7e16e3309ae4d7ee6cb8d397074962489fbe0ec9770a4eb9069c02195.scope: Consumed 1.277s CPU time.
Jan 23 18:59:18 compute-0 podman[244454]: 2026-01-23 18:59:18.29299026 +0000 UTC m=+1.268208959 container died c724e1b7e16e3309ae4d7ee6cb8d397074962489fbe0ec9770a4eb9069c02195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Jan 23 18:59:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-73a9fbe2db60013486ab3f32e24615d88034afa392df100dea523cf1cf21bdc5-merged.mount: Deactivated successfully.
Jan 23 18:59:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:18 compute-0 podman[244454]: 2026-01-23 18:59:18.482940518 +0000 UTC m=+1.458159197 container remove c724e1b7e16e3309ae4d7ee6cb8d397074962489fbe0ec9770a4eb9069c02195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 18:59:18 compute-0 systemd[1]: libpod-conmon-c724e1b7e16e3309ae4d7ee6cb8d397074962489fbe0ec9770a4eb9069c02195.scope: Deactivated successfully.
Jan 23 18:59:18 compute-0 sudo[244377]: pam_unix(sudo:session): session closed for user root
Jan 23 18:59:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 18:59:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:59:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 18:59:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:59:18 compute-0 ceph-mon[75097]: pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:59:18 compute-0 sudo[244565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 18:59:18 compute-0 sudo[244565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 18:59:18 compute-0 sudo[244565]: pam_unix(sudo:session): session closed for user root
Jan 23 18:59:19 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 18:59:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:20 compute-0 ceph-mon[75097]: pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:22 compute-0 ceph-mon[75097]: pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:24 compute-0 ceph-mon[75097]: pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:25 compute-0 podman[244591]: 2026-01-23 18:59:25.009905397 +0000 UTC m=+0.084590046 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 18:59:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:26 compute-0 ceph-mon[75097]: pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_18:59:28
Jan 23 18:59:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 18:59:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 18:59:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr']
Jan 23 18:59:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 18:59:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 23 18:59:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 23 18:59:29 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 23 18:59:29 compute-0 ceph-mon[75097]: pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:59:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:59:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:59:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:59:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 18:59:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 18:59:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 23 18:59:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 23 18:59:30 compute-0 ceph-mon[75097]: osdmap e119: 3 total, 3 up, 3 in
Jan 23 18:59:30 compute-0 ceph-mon[75097]: pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:30 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 23 18:59:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 18:59:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:59:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 18:59:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 18:59:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:59:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 18:59:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:59:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:59:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 18:59:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 18:59:31 compute-0 ceph-mon[75097]: osdmap e120: 3 total, 3 up, 3 in
Jan 23 18:59:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 23 18:59:32 compute-0 ceph-mon[75097]: pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 23 18:59:32 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 23 18:59:33 compute-0 ceph-mon[75097]: osdmap e121: 3 total, 3 up, 3 in
Jan 23 18:59:33 compute-0 podman[244617]: 2026-01-23 18:59:33.972315363 +0000 UTC m=+0.055325471 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 23 18:59:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 37 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 6.1 MiB/s wr, 53 op/s
Jan 23 18:59:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 23 18:59:34 compute-0 ceph-mon[75097]: pgmap v864: 305 pgs: 305 active+clean; 37 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 6.1 MiB/s wr, 53 op/s
Jan 23 18:59:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 23 18:59:34 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 23 18:59:35 compute-0 ceph-mon[75097]: osdmap e122: 3 total, 3 up, 3 in
Jan 23 18:59:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 6.8 MiB/s wr, 53 op/s
Jan 23 18:59:36 compute-0 ceph-mon[75097]: pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 6.8 MiB/s wr, 53 op/s
Jan 23 18:59:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.2 MiB/s wr, 40 op/s
Jan 23 18:59:38 compute-0 ceph-mon[75097]: pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.2 MiB/s wr, 40 op/s
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665824798476675 of space, bias 1.0, pg target 0.1997474395430025 quantized to 32 (current 32)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5476493072857527e-06 of space, bias 4.0, pg target 0.003057179168742903 quantized to 16 (current 16)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 18:59:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 23 18:59:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 23 18:59:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 23 18:59:40 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 23 18:59:41 compute-0 ceph-mon[75097]: pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 23 18:59:41 compute-0 ceph-mon[75097]: osdmap e123: 3 total, 3 up, 3 in
Jan 23 18:59:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 569 KiB/s wr, 7 op/s
Jan 23 18:59:42 compute-0 ceph-mon[75097]: pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 569 KiB/s wr, 7 op/s
Jan 23 18:59:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 847 B/s wr, 6 op/s
Jan 23 18:59:45 compute-0 ceph-mon[75097]: pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 847 B/s wr, 6 op/s
Jan 23 18:59:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 818 B/s wr, 5 op/s
Jan 23 18:59:46 compute-0 ceph-mon[75097]: pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 818 B/s wr, 5 op/s
Jan 23 18:59:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 818 B/s wr, 5 op/s
Jan 23 18:59:49 compute-0 ceph-mon[75097]: pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 818 B/s wr, 5 op/s
Jan 23 18:59:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:51 compute-0 ceph-mon[75097]: pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:52 compute-0 nova_compute[240119]: 2026-01-23 18:59:52.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:59:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:52 compute-0 ceph-mon[75097]: pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 23 18:59:53 compute-0 nova_compute[240119]: 2026-01-23 18:59:53.178 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:59:53 compute-0 nova_compute[240119]: 2026-01-23 18:59:53.178 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:59:53 compute-0 nova_compute[240119]: 2026-01-23 18:59:53.178 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:59:53 compute-0 nova_compute[240119]: 2026-01-23 18:59:53.178 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 18:59:53 compute-0 nova_compute[240119]: 2026-01-23 18:59:53.179 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:59:53 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 18:59:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 18:59:53 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 18:59:53 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T18:59:53.688+0000 7f06a7d20640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 18:59:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:59:53 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563519655' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:59:53 compute-0 nova_compute[240119]: 2026-01-23 18:59:53.749 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:59:53 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1563519655' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:59:53 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/96bd1521-7f0a-4814-8a3f-4ba43ad77cc6'.
Jan 23 18:59:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 18:59:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 18:59:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 18:59:53 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "format": "json"}]: dispatch
Jan 23 18:59:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 18:59:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 18:59:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 18:59:53 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 18:59:53 compute-0 nova_compute[240119]: 2026-01-23 18:59:53.914 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 18:59:53 compute-0 nova_compute[240119]: 2026-01-23 18:59:53.915 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5170MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 18:59:53 compute-0 nova_compute[240119]: 2026-01-23 18:59:53.916 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 18:59:53 compute-0 nova_compute[240119]: 2026-01-23 18:59:53.916 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 18:59:54 compute-0 nova_compute[240119]: 2026-01-23 18:59:54.230 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 18:59:54 compute-0 nova_compute[240119]: 2026-01-23 18:59:54.230 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 18:59:54 compute-0 nova_compute[240119]: 2026-01-23 18:59:54.383 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing inventories for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 18:59:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 23 18:59:54 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 18:59:54 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "format": "json"}]: dispatch
Jan 23 18:59:54 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 18:59:54 compute-0 ceph-mon[75097]: pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 23 18:59:55 compute-0 nova_compute[240119]: 2026-01-23 18:59:55.302 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Updating ProviderTree inventory for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 18:59:55 compute-0 nova_compute[240119]: 2026-01-23 18:59:55.302 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Updating inventory in ProviderTree for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 18:59:55 compute-0 nova_compute[240119]: 2026-01-23 18:59:55.323 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing aggregate associations for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 18:59:55 compute-0 nova_compute[240119]: 2026-01-23 18:59:55.347 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing trait associations for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 18:59:55 compute-0 nova_compute[240119]: 2026-01-23 18:59:55.392 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 18:59:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 18:59:55 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rszfwe(active, since 25m)
Jan 23 18:59:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 18:59:55 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531867978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:59:55 compute-0 nova_compute[240119]: 2026-01-23 18:59:55.989 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 18:59:55 compute-0 nova_compute[240119]: 2026-01-23 18:59:55.996 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 18:59:56 compute-0 nova_compute[240119]: 2026-01-23 18:59:56.016 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 18:59:56 compute-0 nova_compute[240119]: 2026-01-23 18:59:56.017 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 18:59:56 compute-0 nova_compute[240119]: 2026-01-23 18:59:56.018 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 18:59:56 compute-0 podman[244694]: 2026-01-23 18:59:56.019490044 +0000 UTC m=+0.095596986 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 18:59:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 23 18:59:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7fdf74a-3877-444d-a8d2-806e64741582", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 18:59:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7fdf74a-3877-444d-a8d2-806e64741582, vol_name:cephfs) < ""
Jan 23 18:59:56 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f7fdf74a-3877-444d-a8d2-806e64741582/564694cf-df71-4f18-9d9f-cc11342eec83'.
Jan 23 18:59:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f7fdf74a-3877-444d-a8d2-806e64741582/.meta.tmp'
Jan 23 18:59:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f7fdf74a-3877-444d-a8d2-806e64741582/.meta.tmp' to config b'/volumes/_nogroup/f7fdf74a-3877-444d-a8d2-806e64741582/.meta'
Jan 23 18:59:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7fdf74a-3877-444d-a8d2-806e64741582, vol_name:cephfs) < ""
Jan 23 18:59:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7fdf74a-3877-444d-a8d2-806e64741582", "format": "json"}]: dispatch
Jan 23 18:59:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7fdf74a-3877-444d-a8d2-806e64741582, vol_name:cephfs) < ""
Jan 23 18:59:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7fdf74a-3877-444d-a8d2-806e64741582, vol_name:cephfs) < ""
Jan 23 18:59:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 18:59:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 18:59:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 18:59:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2139230004' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:59:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 18:59:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2139230004' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:59:56 compute-0 ceph-mon[75097]: mgrmap e10: compute-0.rszfwe(active, since 25m)
Jan 23 18:59:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/531867978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 18:59:56 compute-0 ceph-mon[75097]: pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 23 18:59:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 18:59:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2139230004' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 18:59:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2139230004' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5/58c64058-42c9-4d10-bcaa-c172aded3cf6'.
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5/.meta.tmp'
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5/.meta.tmp' to config b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5/.meta'
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "format": "json"}]: dispatch
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 18:59:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 18:59:57 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c9f4e008-d833-4462-9cc2-c6b75c41215b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c9f4e008-d833-4462-9cc2-c6b75c41215b, vol_name:cephfs) < ""
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c9f4e008-d833-4462-9cc2-c6b75c41215b/98ac6f0c-3dab-4248-b0bb-2aa3d9aa4d2e'.
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c9f4e008-d833-4462-9cc2-c6b75c41215b/.meta.tmp'
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c9f4e008-d833-4462-9cc2-c6b75c41215b/.meta.tmp' to config b'/volumes/_nogroup/c9f4e008-d833-4462-9cc2-c6b75c41215b/.meta'
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c9f4e008-d833-4462-9cc2-c6b75c41215b, vol_name:cephfs) < ""
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c9f4e008-d833-4462-9cc2-c6b75c41215b", "format": "json"}]: dispatch
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c9f4e008-d833-4462-9cc2-c6b75c41215b, vol_name:cephfs) < ""
Jan 23 18:59:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c9f4e008-d833-4462-9cc2-c6b75c41215b, vol_name:cephfs) < ""
Jan 23 18:59:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 18:59:57 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 18:59:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7fdf74a-3877-444d-a8d2-806e64741582", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 18:59:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7fdf74a-3877-444d-a8d2-806e64741582", "format": "json"}]: dispatch
Jan 23 18:59:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 18:59:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "format": "json"}]: dispatch
Jan 23 18:59:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 18:59:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c9f4e008-d833-4462-9cc2-c6b75c41215b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 18:59:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c9f4e008-d833-4462-9cc2-c6b75c41215b", "format": "json"}]: dispatch
Jan 23 18:59:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 18:59:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 23 18:59:58 compute-0 ceph-mon[75097]: pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.013 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.014 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.036 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.036 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.036 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.060 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.060 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.061 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.061 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.061 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.061 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.061 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 18:59:59 compute-0 nova_compute[240119]: 2026-01-23 18:59:59.061 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 18:59:59 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:59:59.484 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 18:59:59 compute-0 ovn_metadata_agent[155611]: 2026-01-23 18:59:59.485 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 2 op/s
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "c9f4e008-d833-4462-9cc2-c6b75c41215b", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:c9f4e008-d833-4462-9cc2-c6b75c41215b, vol_name:cephfs) < ""
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:c9f4e008-d833-4462-9cc2-c6b75c41215b, vol_name:cephfs) < ""
Jan 23 19:00:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "snap_name": "d0fab392-1de9-468f-8ddd-0e8673bf9cea", "format": "json"}]: dispatch
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d0fab392-1de9-468f-8ddd-0e8673bf9cea, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 19:00:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d0fab392-1de9-468f-8ddd-0e8673bf9cea, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 19:00:01 compute-0 ceph-mon[75097]: pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 2 op/s
Jan 23 19:00:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 2 op/s
Jan 23 19:00:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "c9f4e008-d833-4462-9cc2-c6b75c41215b", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 23 19:00:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "snap_name": "d0fab392-1de9-468f-8ddd-0e8673bf9cea", "format": "json"}]: dispatch
Jan 23 19:00:03 compute-0 ceph-mon[75097]: pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 2 op/s
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c9f4e008-d833-4462-9cc2-c6b75c41215b", "format": "json"}]: dispatch
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c9f4e008-d833-4462-9cc2-c6b75c41215b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c9f4e008-d833-4462-9cc2-c6b75c41215b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c9f4e008-d833-4462-9cc2-c6b75c41215b' of type subvolume
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.355+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c9f4e008-d833-4462-9cc2-c6b75c41215b' of type subvolume
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c9f4e008-d833-4462-9cc2-c6b75c41215b", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c9f4e008-d833-4462-9cc2-c6b75c41215b, vol_name:cephfs) < ""
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c9f4e008-d833-4462-9cc2-c6b75c41215b'' moved to trashcan
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c9f4e008-d833-4462-9cc2-c6b75c41215b, vol_name:cephfs) < ""
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.369+0000 7f06aa525640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.369+0000 7f06aa525640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.369+0000 7f06aa525640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.369+0000 7f06aa525640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.369+0000 7f06aa525640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.386+0000 7f06a9523640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.386+0000 7f06a9523640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.386+0000 7f06a9523640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.386+0000 7f06a9523640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:03.386+0000 7f06a9523640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:00:04 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.rszfwe(active, since 25m)
Jan 23 19:00:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c9f4e008-d833-4462-9cc2-c6b75c41215b", "format": "json"}]: dispatch
Jan 23 19:00:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c9f4e008-d833-4462-9cc2-c6b75c41215b", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 4 op/s
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "snap_name": "d0fab392-1de9-468f-8ddd-0e8673bf9cea_6bcd36ca-343f-4a64-9ade-e69aae9fc755", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0fab392-1de9-468f-8ddd-0e8673bf9cea_6bcd36ca-343f-4a64-9ade-e69aae9fc755, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5/.meta.tmp'
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5/.meta.tmp' to config b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5/.meta'
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0fab392-1de9-468f-8ddd-0e8673bf9cea_6bcd36ca-343f-4a64-9ade-e69aae9fc755, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "snap_name": "d0fab392-1de9-468f-8ddd-0e8673bf9cea", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0fab392-1de9-468f-8ddd-0e8673bf9cea, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5/.meta.tmp'
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5/.meta.tmp' to config b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5/.meta'
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0fab392-1de9-468f-8ddd-0e8673bf9cea, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 19:00:04 compute-0 podman[244746]: 2026-01-23 19:00:04.971624362 +0000 UTC m=+0.054931292 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7fdf74a-3877-444d-a8d2-806e64741582", "format": "json"}]: dispatch
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f7fdf74a-3877-444d-a8d2-806e64741582, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f7fdf74a-3877-444d-a8d2-806e64741582, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:04 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:04.980+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7fdf74a-3877-444d-a8d2-806e64741582' of type subvolume
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7fdf74a-3877-444d-a8d2-806e64741582' of type subvolume
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7fdf74a-3877-444d-a8d2-806e64741582", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7fdf74a-3877-444d-a8d2-806e64741582, vol_name:cephfs) < ""
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f7fdf74a-3877-444d-a8d2-806e64741582'' moved to trashcan
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:00:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7fdf74a-3877-444d-a8d2-806e64741582, vol_name:cephfs) < ""
Jan 23 19:00:05 compute-0 ceph-mon[75097]: mgrmap e11: compute-0.rszfwe(active, since 25m)
Jan 23 19:00:05 compute-0 ceph-mon[75097]: pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 4 op/s
Jan 23 19:00:05 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a8a8bc57-0cf3-40b2-af69-e9b0e686c548", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8a8bc57-0cf3-40b2-af69-e9b0e686c548, vol_name:cephfs) < ""
Jan 23 19:00:05 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a8a8bc57-0cf3-40b2-af69-e9b0e686c548/ed0bd6c2-d3b2-4881-adac-ee4b7ce9441d'.
Jan 23 19:00:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8a8bc57-0cf3-40b2-af69-e9b0e686c548/.meta.tmp'
Jan 23 19:00:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8a8bc57-0cf3-40b2-af69-e9b0e686c548/.meta.tmp' to config b'/volumes/_nogroup/a8a8bc57-0cf3-40b2-af69-e9b0e686c548/.meta'
Jan 23 19:00:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8a8bc57-0cf3-40b2-af69-e9b0e686c548, vol_name:cephfs) < ""
Jan 23 19:00:05 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a8a8bc57-0cf3-40b2-af69-e9b0e686c548", "format": "json"}]: dispatch
Jan 23 19:00:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8a8bc57-0cf3-40b2-af69-e9b0e686c548, vol_name:cephfs) < ""
Jan 23 19:00:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8a8bc57-0cf3-40b2-af69-e9b0e686c548, vol_name:cephfs) < ""
Jan 23 19:00:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:00:05 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 23 19:00:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "snap_name": "d0fab392-1de9-468f-8ddd-0e8673bf9cea_6bcd36ca-343f-4a64-9ade-e69aae9fc755", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "snap_name": "d0fab392-1de9-468f-8ddd-0e8673bf9cea", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7fdf74a-3877-444d-a8d2-806e64741582", "format": "json"}]: dispatch
Jan 23 19:00:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7fdf74a-3877-444d-a8d2-806e64741582", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a8a8bc57-0cf3-40b2-af69-e9b0e686c548", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a8a8bc57-0cf3-40b2-af69-e9b0e686c548", "format": "json"}]: dispatch
Jan 23 19:00:06 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 23 19:00:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 23 19:00:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 4 op/s
Jan 23 19:00:07 compute-0 ceph-mon[75097]: osdmap e124: 3 total, 3 up, 3 in
Jan 23 19:00:07 compute-0 ceph-mon[75097]: pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 4 op/s
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "format": "json"}]: dispatch
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0a8215b-eedb-45d9-a08e-7e13600761d5' of type subvolume
Jan 23 19:00:08 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:08.358+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0a8215b-eedb-45d9-a08e-7e13600761d5' of type subvolume
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a0a8215b-eedb-45d9-a08e-7e13600761d5'' moved to trashcan
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0a8215b-eedb-45d9-a08e-7e13600761d5, vol_name:cephfs) < ""
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 4 op/s
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "a8a8bc57-0cf3-40b2-af69-e9b0e686c548", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:a8a8bc57-0cf3-40b2-af69-e9b0e686c548, vol_name:cephfs) < ""
Jan 23 19:00:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:a8a8bc57-0cf3-40b2-af69-e9b0e686c548, vol_name:cephfs) < ""
Jan 23 19:00:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "format": "json"}]: dispatch
Jan 23 19:00:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a0a8215b-eedb-45d9-a08e-7e13600761d5", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:08 compute-0 ceph-mon[75097]: pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 4 op/s
Jan 23 19:00:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "a8a8bc57-0cf3-40b2-af69-e9b0e686c548", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 23 19:00:09 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:00:09.487 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 19:00:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a8a8bc57-0cf3-40b2-af69-e9b0e686c548", "format": "json"}]: dispatch
Jan 23 19:00:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a8a8bc57-0cf3-40b2-af69-e9b0e686c548, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a8a8bc57-0cf3-40b2-af69-e9b0e686c548, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:09 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8a8bc57-0cf3-40b2-af69-e9b0e686c548' of type subvolume
Jan 23 19:00:09 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:09.635+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8a8bc57-0cf3-40b2-af69-e9b0e686c548' of type subvolume
Jan 23 19:00:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a8a8bc57-0cf3-40b2-af69-e9b0e686c548", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8a8bc57-0cf3-40b2-af69-e9b0e686c548, vol_name:cephfs) < ""
Jan 23 19:00:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a8a8bc57-0cf3-40b2-af69-e9b0e686c548'' moved to trashcan
Jan 23 19:00:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:00:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8a8bc57-0cf3-40b2-af69-e9b0e686c548, vol_name:cephfs) < ""
Jan 23 19:00:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 20 KiB/s wr, 6 op/s
Jan 23 19:00:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a8a8bc57-0cf3-40b2-af69-e9b0e686c548", "format": "json"}]: dispatch
Jan 23 19:00:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a8a8bc57-0cf3-40b2-af69-e9b0e686c548", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:10 compute-0 ceph-mon[75097]: pgmap v885: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 20 KiB/s wr, 6 op/s
Jan 23 19:00:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 23 19:00:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 23 19:00:10 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 23 19:00:12 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:12 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba, vol_name:cephfs) < ""
Jan 23 19:00:12 compute-0 ceph-mon[75097]: osdmap e125: 3 total, 3 up, 3 in
Jan 23 19:00:12 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba/450b6288-d44e-4784-afc8-35081d573dbf'.
Jan 23 19:00:12 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba/.meta.tmp'
Jan 23 19:00:12 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba/.meta.tmp' to config b'/volumes/_nogroup/9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba/.meta'
Jan 23 19:00:12 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba, vol_name:cephfs) < ""
Jan 23 19:00:12 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba", "format": "json"}]: dispatch
Jan 23 19:00:12 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba, vol_name:cephfs) < ""
Jan 23 19:00:12 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba, vol_name:cephfs) < ""
Jan 23 19:00:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:00:12 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 18 KiB/s wr, 5 op/s
Jan 23 19:00:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "049e69f2-19e1-41e8-b13d-2d953d142e3c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:049e69f2-19e1-41e8-b13d-2d953d142e3c, vol_name:cephfs) < ""
Jan 23 19:00:13 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/049e69f2-19e1-41e8-b13d-2d953d142e3c/76a3c7d0-7076-4941-abae-fe7b4a268443'.
Jan 23 19:00:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/049e69f2-19e1-41e8-b13d-2d953d142e3c/.meta.tmp'
Jan 23 19:00:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/049e69f2-19e1-41e8-b13d-2d953d142e3c/.meta.tmp' to config b'/volumes/_nogroup/049e69f2-19e1-41e8-b13d-2d953d142e3c/.meta'
Jan 23 19:00:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:049e69f2-19e1-41e8-b13d-2d953d142e3c, vol_name:cephfs) < ""
Jan 23 19:00:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "049e69f2-19e1-41e8-b13d-2d953d142e3c", "format": "json"}]: dispatch
Jan 23 19:00:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:049e69f2-19e1-41e8-b13d-2d953d142e3c, vol_name:cephfs) < ""
Jan 23 19:00:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:049e69f2-19e1-41e8-b13d-2d953d142e3c, vol_name:cephfs) < ""
Jan 23 19:00:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:00:13 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:00:13.581 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:00:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:00:13.583 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:00:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:00:13.583 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:00:13 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:13 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba", "format": "json"}]: dispatch
Jan 23 19:00:13 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:13 compute-0 ceph-mon[75097]: pgmap v887: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 18 KiB/s wr, 5 op/s
Jan 23 19:00:13 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 745 B/s rd, 33 KiB/s wr, 10 op/s
Jan 23 19:00:14 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "049e69f2-19e1-41e8-b13d-2d953d142e3c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:14 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "049e69f2-19e1-41e8-b13d-2d953d142e3c", "format": "json"}]: dispatch
Jan 23 19:00:14 compute-0 ceph-mon[75097]: pgmap v888: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 745 B/s rd, 33 KiB/s wr, 10 op/s
Jan 23 19:00:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Jan 23 19:00:16 compute-0 ceph-mon[75097]: pgmap v889: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Jan 23 19:00:17 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "049e69f2-19e1-41e8-b13d-2d953d142e3c", "format": "json"}]: dispatch
Jan 23 19:00:17 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:049e69f2-19e1-41e8-b13d-2d953d142e3c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:17 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:049e69f2-19e1-41e8-b13d-2d953d142e3c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:17 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '049e69f2-19e1-41e8-b13d-2d953d142e3c' of type subvolume
Jan 23 19:00:17 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:17.095+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '049e69f2-19e1-41e8-b13d-2d953d142e3c' of type subvolume
Jan 23 19:00:17 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "049e69f2-19e1-41e8-b13d-2d953d142e3c", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:17 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:049e69f2-19e1-41e8-b13d-2d953d142e3c, vol_name:cephfs) < ""
Jan 23 19:00:17 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/049e69f2-19e1-41e8-b13d-2d953d142e3c'' moved to trashcan
Jan 23 19:00:17 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:00:17 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:049e69f2-19e1-41e8-b13d-2d953d142e3c, vol_name:cephfs) < ""
Jan 23 19:00:17 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "049e69f2-19e1-41e8-b13d-2d953d142e3c", "format": "json"}]: dispatch
Jan 23 19:00:17 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "049e69f2-19e1-41e8-b13d-2d953d142e3c", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Jan 23 19:00:18 compute-0 sudo[244765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:00:18 compute-0 sudo[244765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:00:18 compute-0 sudo[244765]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:18 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba", "format": "json"}]: dispatch
Jan 23 19:00:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:18 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba' of type subvolume
Jan 23 19:00:18 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:18.704+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba' of type subvolume
Jan 23 19:00:18 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba, vol_name:cephfs) < ""
Jan 23 19:00:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba'' moved to trashcan
Jan 23 19:00:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:00:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba, vol_name:cephfs) < ""
Jan 23 19:00:18 compute-0 sudo[244790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:00:18 compute-0 sudo[244790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:00:19 compute-0 ceph-mon[75097]: pgmap v890: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Jan 23 19:00:19 compute-0 sudo[244790]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:00:19 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:00:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:00:19 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:00:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:00:19 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:00:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:00:19 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:00:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:00:19 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:00:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:00:19 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:00:19 compute-0 sudo[244846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:00:19 compute-0 sudo[244846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:00:19 compute-0 sudo[244846]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:19 compute-0 sudo[244871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:00:19 compute-0 sudo[244871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:00:19 compute-0 podman[244909]: 2026-01-23 19:00:19.906021657 +0000 UTC m=+0.043232326 container create 6626103e50afd756371ecce864cd863a3a4c93922844be13d2e463b040044447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 19:00:19 compute-0 systemd[1]: Started libpod-conmon-6626103e50afd756371ecce864cd863a3a4c93922844be13d2e463b040044447.scope.
Jan 23 19:00:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:00:19 compute-0 podman[244909]: 2026-01-23 19:00:19.886345457 +0000 UTC m=+0.023556146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:00:19 compute-0 podman[244909]: 2026-01-23 19:00:19.990896339 +0000 UTC m=+0.128107018 container init 6626103e50afd756371ecce864cd863a3a4c93922844be13d2e463b040044447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lehmann, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 19:00:19 compute-0 podman[244909]: 2026-01-23 19:00:19.998079295 +0000 UTC m=+0.135289964 container start 6626103e50afd756371ecce864cd863a3a4c93922844be13d2e463b040044447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lehmann, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:00:20 compute-0 podman[244909]: 2026-01-23 19:00:20.001901708 +0000 UTC m=+0.139112397 container attach 6626103e50afd756371ecce864cd863a3a4c93922844be13d2e463b040044447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 19:00:20 compute-0 clever_lehmann[244925]: 167 167
Jan 23 19:00:20 compute-0 systemd[1]: libpod-6626103e50afd756371ecce864cd863a3a4c93922844be13d2e463b040044447.scope: Deactivated successfully.
Jan 23 19:00:20 compute-0 podman[244909]: 2026-01-23 19:00:20.003537098 +0000 UTC m=+0.140747787 container died 6626103e50afd756371ecce864cd863a3a4c93922844be13d2e463b040044447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lehmann, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 19:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0565efe0ace9975cbddd61d5400ef823646b7ee1d501d3c802850d639969f9eb-merged.mount: Deactivated successfully.
Jan 23 19:00:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba", "format": "json"}]: dispatch
Jan 23 19:00:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9ba675d4-570c-4cbb-af0d-aeb9f85aa3ba", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:00:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:00:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:00:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:00:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:00:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:00:20 compute-0 podman[244909]: 2026-01-23 19:00:20.052822501 +0000 UTC m=+0.190033170 container remove 6626103e50afd756371ecce864cd863a3a4c93922844be13d2e463b040044447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 19:00:20 compute-0 systemd[1]: libpod-conmon-6626103e50afd756371ecce864cd863a3a4c93922844be13d2e463b040044447.scope: Deactivated successfully.
Jan 23 19:00:20 compute-0 podman[244947]: 2026-01-23 19:00:20.2120424 +0000 UTC m=+0.037712262 container create f0b6272723e15f2a6ab0b2cb391ccf482d799a9accfe4133d3220714cc52a695 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 19:00:20 compute-0 systemd[1]: Started libpod-conmon-f0b6272723e15f2a6ab0b2cb391ccf482d799a9accfe4133d3220714cc52a695.scope.
Jan 23 19:00:20 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfccedf93eada4a5e826b92816c5790b8b5affd7b12a2ab6a1fcc05c551ec74e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfccedf93eada4a5e826b92816c5790b8b5affd7b12a2ab6a1fcc05c551ec74e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfccedf93eada4a5e826b92816c5790b8b5affd7b12a2ab6a1fcc05c551ec74e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfccedf93eada4a5e826b92816c5790b8b5affd7b12a2ab6a1fcc05c551ec74e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfccedf93eada4a5e826b92816c5790b8b5affd7b12a2ab6a1fcc05c551ec74e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:20 compute-0 podman[244947]: 2026-01-23 19:00:20.279373294 +0000 UTC m=+0.105043186 container init f0b6272723e15f2a6ab0b2cb391ccf482d799a9accfe4133d3220714cc52a695 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_shirley, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 19:00:20 compute-0 podman[244947]: 2026-01-23 19:00:20.28698718 +0000 UTC m=+0.112657052 container start f0b6272723e15f2a6ab0b2cb391ccf482d799a9accfe4133d3220714cc52a695 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_shirley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:00:20 compute-0 podman[244947]: 2026-01-23 19:00:20.194983123 +0000 UTC m=+0.020653025 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:00:20 compute-0 podman[244947]: 2026-01-23 19:00:20.291533731 +0000 UTC m=+0.117203633 container attach f0b6272723e15f2a6ab0b2cb391ccf482d799a9accfe4133d3220714cc52a695 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:00:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 18 KiB/s wr, 5 op/s
Jan 23 19:00:20 compute-0 hungry_shirley[244963]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:00:20 compute-0 hungry_shirley[244963]: --> All data devices are unavailable
Jan 23 19:00:20 compute-0 systemd[1]: libpod-f0b6272723e15f2a6ab0b2cb391ccf482d799a9accfe4133d3220714cc52a695.scope: Deactivated successfully.
Jan 23 19:00:20 compute-0 podman[244947]: 2026-01-23 19:00:20.761781954 +0000 UTC m=+0.587451846 container died f0b6272723e15f2a6ab0b2cb391ccf482d799a9accfe4133d3220714cc52a695 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 19:00:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e378cb20-e217-4231-960b-f50749860b80", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e378cb20-e217-4231-960b-f50749860b80, vol_name:cephfs) < ""
Jan 23 19:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfccedf93eada4a5e826b92816c5790b8b5affd7b12a2ab6a1fcc05c551ec74e-merged.mount: Deactivated successfully.
Jan 23 19:00:20 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e378cb20-e217-4231-960b-f50749860b80/13537e84-a011-4234-ada7-6aaf18524ea7'.
Jan 23 19:00:20 compute-0 podman[244947]: 2026-01-23 19:00:20.808843192 +0000 UTC m=+0.634513064 container remove f0b6272723e15f2a6ab0b2cb391ccf482d799a9accfe4133d3220714cc52a695 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:00:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e378cb20-e217-4231-960b-f50749860b80/.meta.tmp'
Jan 23 19:00:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e378cb20-e217-4231-960b-f50749860b80/.meta.tmp' to config b'/volumes/_nogroup/e378cb20-e217-4231-960b-f50749860b80/.meta'
Jan 23 19:00:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e378cb20-e217-4231-960b-f50749860b80, vol_name:cephfs) < ""
Jan 23 19:00:20 compute-0 systemd[1]: libpod-conmon-f0b6272723e15f2a6ab0b2cb391ccf482d799a9accfe4133d3220714cc52a695.scope: Deactivated successfully.
Jan 23 19:00:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e378cb20-e217-4231-960b-f50749860b80", "format": "json"}]: dispatch
Jan 23 19:00:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e378cb20-e217-4231-960b-f50749860b80, vol_name:cephfs) < ""
Jan 23 19:00:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e378cb20-e217-4231-960b-f50749860b80, vol_name:cephfs) < ""
Jan 23 19:00:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:00:20 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:20 compute-0 sudo[244871]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:20 compute-0 sudo[244995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:00:20 compute-0 sudo[244995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:00:20 compute-0 sudo[244995]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:20 compute-0 sudo[245020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:00:20 compute-0 sudo[245020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:00:21 compute-0 ceph-mon[75097]: pgmap v891: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 18 KiB/s wr, 5 op/s
Jan 23 19:00:21 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:21 compute-0 podman[245057]: 2026-01-23 19:00:21.26329128 +0000 UTC m=+0.040268575 container create 5d49d2a8117592d6b8a796a69c40c43ffd5f26456b5a067d88b6d0ddfa61e2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pike, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 23 19:00:21 compute-0 systemd[1]: Started libpod-conmon-5d49d2a8117592d6b8a796a69c40c43ffd5f26456b5a067d88b6d0ddfa61e2cb.scope.
Jan 23 19:00:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:00:21 compute-0 podman[245057]: 2026-01-23 19:00:21.246245714 +0000 UTC m=+0.023223029 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:00:21 compute-0 podman[245057]: 2026-01-23 19:00:21.342980996 +0000 UTC m=+0.119958311 container init 5d49d2a8117592d6b8a796a69c40c43ffd5f26456b5a067d88b6d0ddfa61e2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:00:21 compute-0 podman[245057]: 2026-01-23 19:00:21.351403111 +0000 UTC m=+0.128380416 container start 5d49d2a8117592d6b8a796a69c40c43ffd5f26456b5a067d88b6d0ddfa61e2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 19:00:21 compute-0 dreamy_pike[245073]: 167 167
Jan 23 19:00:21 compute-0 systemd[1]: libpod-5d49d2a8117592d6b8a796a69c40c43ffd5f26456b5a067d88b6d0ddfa61e2cb.scope: Deactivated successfully.
Jan 23 19:00:21 compute-0 podman[245057]: 2026-01-23 19:00:21.356152067 +0000 UTC m=+0.133129362 container attach 5d49d2a8117592d6b8a796a69c40c43ffd5f26456b5a067d88b6d0ddfa61e2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pike, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:00:21 compute-0 conmon[245073]: conmon 5d49d2a8117592d6b8a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d49d2a8117592d6b8a796a69c40c43ffd5f26456b5a067d88b6d0ddfa61e2cb.scope/container/memory.events
Jan 23 19:00:21 compute-0 podman[245057]: 2026-01-23 19:00:21.358007253 +0000 UTC m=+0.134984558 container died 5d49d2a8117592d6b8a796a69c40c43ffd5f26456b5a067d88b6d0ddfa61e2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pike, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 19:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4045f55bf24257d61c4d831b2377ac801abb901c080ae6219c108b80e13c8f67-merged.mount: Deactivated successfully.
Jan 23 19:00:21 compute-0 podman[245057]: 2026-01-23 19:00:21.404907988 +0000 UTC m=+0.181885283 container remove 5d49d2a8117592d6b8a796a69c40c43ffd5f26456b5a067d88b6d0ddfa61e2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:00:21 compute-0 systemd[1]: libpod-conmon-5d49d2a8117592d6b8a796a69c40c43ffd5f26456b5a067d88b6d0ddfa61e2cb.scope: Deactivated successfully.
Jan 23 19:00:21 compute-0 podman[245096]: 2026-01-23 19:00:21.562104946 +0000 UTC m=+0.042325474 container create 051c961c080ba171ff569478edb9faf8fc593f0cdbf031d53666fd0db49daad4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:00:21 compute-0 systemd[1]: Started libpod-conmon-051c961c080ba171ff569478edb9faf8fc593f0cdbf031d53666fd0db49daad4.scope.
Jan 23 19:00:21 compute-0 podman[245096]: 2026-01-23 19:00:21.543678396 +0000 UTC m=+0.023898964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:00:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c255a9e3e745dfcaa8dcc9626c26e14754b6e3c9d1398b8210ae3f923a82d2cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c255a9e3e745dfcaa8dcc9626c26e14754b6e3c9d1398b8210ae3f923a82d2cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c255a9e3e745dfcaa8dcc9626c26e14754b6e3c9d1398b8210ae3f923a82d2cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c255a9e3e745dfcaa8dcc9626c26e14754b6e3c9d1398b8210ae3f923a82d2cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:21 compute-0 podman[245096]: 2026-01-23 19:00:21.663774279 +0000 UTC m=+0.143994817 container init 051c961c080ba171ff569478edb9faf8fc593f0cdbf031d53666fd0db49daad4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:00:21 compute-0 podman[245096]: 2026-01-23 19:00:21.672673086 +0000 UTC m=+0.152893634 container start 051c961c080ba171ff569478edb9faf8fc593f0cdbf031d53666fd0db49daad4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:00:21 compute-0 podman[245096]: 2026-01-23 19:00:21.677045944 +0000 UTC m=+0.157266502 container attach 051c961c080ba171ff569478edb9faf8fc593f0cdbf031d53666fd0db49daad4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 19:00:21 compute-0 cool_banzai[245113]: {
Jan 23 19:00:21 compute-0 cool_banzai[245113]:     "0": [
Jan 23 19:00:21 compute-0 cool_banzai[245113]:         {
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "devices": [
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "/dev/loop3"
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             ],
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_name": "ceph_lv0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_size": "21470642176",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "name": "ceph_lv0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "tags": {
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.cluster_name": "ceph",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.crush_device_class": "",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.encrypted": "0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.objectstore": "bluestore",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.osd_id": "0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.type": "block",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.vdo": "0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.with_tpm": "0"
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             },
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "type": "block",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "vg_name": "ceph_vg0"
Jan 23 19:00:21 compute-0 cool_banzai[245113]:         }
Jan 23 19:00:21 compute-0 cool_banzai[245113]:     ],
Jan 23 19:00:21 compute-0 cool_banzai[245113]:     "1": [
Jan 23 19:00:21 compute-0 cool_banzai[245113]:         {
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "devices": [
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "/dev/loop4"
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             ],
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_name": "ceph_lv1",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_size": "21470642176",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "name": "ceph_lv1",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "tags": {
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.cluster_name": "ceph",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.crush_device_class": "",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.encrypted": "0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.objectstore": "bluestore",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.osd_id": "1",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.type": "block",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.vdo": "0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.with_tpm": "0"
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             },
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "type": "block",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "vg_name": "ceph_vg1"
Jan 23 19:00:21 compute-0 cool_banzai[245113]:         }
Jan 23 19:00:21 compute-0 cool_banzai[245113]:     ],
Jan 23 19:00:21 compute-0 cool_banzai[245113]:     "2": [
Jan 23 19:00:21 compute-0 cool_banzai[245113]:         {
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "devices": [
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "/dev/loop5"
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             ],
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_name": "ceph_lv2",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_size": "21470642176",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "name": "ceph_lv2",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "tags": {
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.cluster_name": "ceph",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.crush_device_class": "",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.encrypted": "0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.objectstore": "bluestore",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.osd_id": "2",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.type": "block",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.vdo": "0",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:                 "ceph.with_tpm": "0"
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             },
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "type": "block",
Jan 23 19:00:21 compute-0 cool_banzai[245113]:             "vg_name": "ceph_vg2"
Jan 23 19:00:21 compute-0 cool_banzai[245113]:         }
Jan 23 19:00:21 compute-0 cool_banzai[245113]:     ]
Jan 23 19:00:21 compute-0 cool_banzai[245113]: }
Jan 23 19:00:21 compute-0 systemd[1]: libpod-051c961c080ba171ff569478edb9faf8fc593f0cdbf031d53666fd0db49daad4.scope: Deactivated successfully.
Jan 23 19:00:21 compute-0 conmon[245113]: conmon 051c961c080ba171ff56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-051c961c080ba171ff569478edb9faf8fc593f0cdbf031d53666fd0db49daad4.scope/container/memory.events
Jan 23 19:00:21 compute-0 podman[245096]: 2026-01-23 19:00:21.975670215 +0000 UTC m=+0.455890763 container died 051c961c080ba171ff569478edb9faf8fc593f0cdbf031d53666fd0db49daad4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c255a9e3e745dfcaa8dcc9626c26e14754b6e3c9d1398b8210ae3f923a82d2cd-merged.mount: Deactivated successfully.
Jan 23 19:00:22 compute-0 podman[245096]: 2026-01-23 19:00:22.02050713 +0000 UTC m=+0.500727668 container remove 051c961c080ba171ff569478edb9faf8fc593f0cdbf031d53666fd0db49daad4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:00:22 compute-0 systemd[1]: libpod-conmon-051c961c080ba171ff569478edb9faf8fc593f0cdbf031d53666fd0db49daad4.scope: Deactivated successfully.
Jan 23 19:00:22 compute-0 sudo[245020]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e378cb20-e217-4231-960b-f50749860b80", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e378cb20-e217-4231-960b-f50749860b80", "format": "json"}]: dispatch
Jan 23 19:00:22 compute-0 sudo[245133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:00:22 compute-0 sudo[245133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:00:22 compute-0 sudo[245133]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:22 compute-0 sudo[245158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:00:22 compute-0 sudo[245158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:00:22 compute-0 podman[245193]: 2026-01-23 19:00:22.463507337 +0000 UTC m=+0.038603673 container create 35f1e8fb561337b7522356755fb725fe64fca6b3a61e3e0eb9003f9352a33969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 19:00:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 264 B/s rd, 16 KiB/s wr, 5 op/s
Jan 23 19:00:22 compute-0 systemd[1]: Started libpod-conmon-35f1e8fb561337b7522356755fb725fe64fca6b3a61e3e0eb9003f9352a33969.scope.
Jan 23 19:00:22 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:00:22 compute-0 podman[245193]: 2026-01-23 19:00:22.540976169 +0000 UTC m=+0.116072535 container init 35f1e8fb561337b7522356755fb725fe64fca6b3a61e3e0eb9003f9352a33969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Jan 23 19:00:22 compute-0 podman[245193]: 2026-01-23 19:00:22.448129062 +0000 UTC m=+0.023225418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:00:22 compute-0 podman[245193]: 2026-01-23 19:00:22.548611235 +0000 UTC m=+0.123707571 container start 35f1e8fb561337b7522356755fb725fe64fca6b3a61e3e0eb9003f9352a33969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 19:00:22 compute-0 lucid_lumiere[245209]: 167 167
Jan 23 19:00:22 compute-0 systemd[1]: libpod-35f1e8fb561337b7522356755fb725fe64fca6b3a61e3e0eb9003f9352a33969.scope: Deactivated successfully.
Jan 23 19:00:22 compute-0 podman[245193]: 2026-01-23 19:00:22.552791118 +0000 UTC m=+0.127887484 container attach 35f1e8fb561337b7522356755fb725fe64fca6b3a61e3e0eb9003f9352a33969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:00:22 compute-0 podman[245214]: 2026-01-23 19:00:22.590387066 +0000 UTC m=+0.023775522 container died 35f1e8fb561337b7522356755fb725fe64fca6b3a61e3e0eb9003f9352a33969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9bf206584de7fd10e4a239d55b0a2c2beef287dfff3daaa695878ebf04e008a-merged.mount: Deactivated successfully.
Jan 23 19:00:22 compute-0 podman[245214]: 2026-01-23 19:00:22.89729373 +0000 UTC m=+0.330682216 container remove 35f1e8fb561337b7522356755fb725fe64fca6b3a61e3e0eb9003f9352a33969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:00:22 compute-0 systemd[1]: libpod-conmon-35f1e8fb561337b7522356755fb725fe64fca6b3a61e3e0eb9003f9352a33969.scope: Deactivated successfully.
Jan 23 19:00:23 compute-0 ceph-mon[75097]: pgmap v892: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 264 B/s rd, 16 KiB/s wr, 5 op/s
Jan 23 19:00:23 compute-0 podman[245236]: 2026-01-23 19:00:23.105706129 +0000 UTC m=+0.045309647 container create 3de6491e193eec78f455e8ebdc63a0cbe2cb1deca54bf9ae92d9e03a048097e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 19:00:23 compute-0 systemd[1]: Started libpod-conmon-3de6491e193eec78f455e8ebdc63a0cbe2cb1deca54bf9ae92d9e03a048097e2.scope.
Jan 23 19:00:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0a23a824d76e991232a6acf377315b3ce32efa287a60c3702014ccb487fb32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0a23a824d76e991232a6acf377315b3ce32efa287a60c3702014ccb487fb32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0a23a824d76e991232a6acf377315b3ce32efa287a60c3702014ccb487fb32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0a23a824d76e991232a6acf377315b3ce32efa287a60c3702014ccb487fb32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:00:23 compute-0 podman[245236]: 2026-01-23 19:00:23.088048627 +0000 UTC m=+0.027652165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:00:23 compute-0 podman[245236]: 2026-01-23 19:00:23.194194579 +0000 UTC m=+0.133798127 container init 3de6491e193eec78f455e8ebdc63a0cbe2cb1deca54bf9ae92d9e03a048097e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:00:23 compute-0 podman[245236]: 2026-01-23 19:00:23.201037117 +0000 UTC m=+0.140640635 container start 3de6491e193eec78f455e8ebdc63a0cbe2cb1deca54bf9ae92d9e03a048097e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:00:23 compute-0 podman[245236]: 2026-01-23 19:00:23.205031374 +0000 UTC m=+0.144634912 container attach 3de6491e193eec78f455e8ebdc63a0cbe2cb1deca54bf9ae92d9e03a048097e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:00:23 compute-0 lvm[245331]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:00:23 compute-0 lvm[245330]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:00:23 compute-0 lvm[245331]: VG ceph_vg1 finished
Jan 23 19:00:23 compute-0 lvm[245330]: VG ceph_vg0 finished
Jan 23 19:00:23 compute-0 lvm[245333]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:00:23 compute-0 lvm[245333]: VG ceph_vg2 finished
Jan 23 19:00:24 compute-0 lucid_easley[245252]: {}
Jan 23 19:00:24 compute-0 systemd[1]: libpod-3de6491e193eec78f455e8ebdc63a0cbe2cb1deca54bf9ae92d9e03a048097e2.scope: Deactivated successfully.
Jan 23 19:00:24 compute-0 systemd[1]: libpod-3de6491e193eec78f455e8ebdc63a0cbe2cb1deca54bf9ae92d9e03a048097e2.scope: Consumed 1.361s CPU time.
Jan 23 19:00:24 compute-0 podman[245236]: 2026-01-23 19:00:24.029011504 +0000 UTC m=+0.968615022 container died 3de6491e193eec78f455e8ebdc63a0cbe2cb1deca54bf9ae92d9e03a048097e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 19:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0a23a824d76e991232a6acf377315b3ce32efa287a60c3702014ccb487fb32-merged.mount: Deactivated successfully.
Jan 23 19:00:24 compute-0 podman[245236]: 2026-01-23 19:00:24.080071002 +0000 UTC m=+1.019674520 container remove 3de6491e193eec78f455e8ebdc63a0cbe2cb1deca54bf9ae92d9e03a048097e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:00:24 compute-0 systemd[1]: libpod-conmon-3de6491e193eec78f455e8ebdc63a0cbe2cb1deca54bf9ae92d9e03a048097e2.scope: Deactivated successfully.
Jan 23 19:00:24 compute-0 sudo[245158]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:00:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:00:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:00:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:00:24 compute-0 sudo[245347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:00:24 compute-0 sudo[245347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:00:24 compute-0 sudo[245347]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e378cb20-e217-4231-960b-f50749860b80", "format": "json"}]: dispatch
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e378cb20-e217-4231-960b-f50749860b80, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e378cb20-e217-4231-960b-f50749860b80, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e378cb20-e217-4231-960b-f50749860b80' of type subvolume
Jan 23 19:00:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:24.474+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e378cb20-e217-4231-960b-f50749860b80' of type subvolume
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e378cb20-e217-4231-960b-f50749860b80", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e378cb20-e217-4231-960b-f50749860b80, vol_name:cephfs) < ""
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 21 KiB/s wr, 7 op/s
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e378cb20-e217-4231-960b-f50749860b80'' moved to trashcan
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e378cb20-e217-4231-960b-f50749860b80, vol_name:cephfs) < ""
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e6c475e8-8ebf-4542-a7ab-36860fe78789", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:e6c475e8-8ebf-4542-a7ab-36860fe78789, vol_name:cephfs) < ""
Jan 23 19:00:24 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e6c475e8-8ebf-4542-a7ab-36860fe78789/4fe40108-7329-4a56-a747-8b44a0e65832'.
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e6c475e8-8ebf-4542-a7ab-36860fe78789/.meta.tmp'
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e6c475e8-8ebf-4542-a7ab-36860fe78789/.meta.tmp' to config b'/volumes/_nogroup/e6c475e8-8ebf-4542-a7ab-36860fe78789/.meta'
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:e6c475e8-8ebf-4542-a7ab-36860fe78789, vol_name:cephfs) < ""
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e6c475e8-8ebf-4542-a7ab-36860fe78789", "format": "json"}]: dispatch
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e6c475e8-8ebf-4542-a7ab-36860fe78789, vol_name:cephfs) < ""
Jan 23 19:00:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e6c475e8-8ebf-4542-a7ab-36860fe78789, vol_name:cephfs) < ""
Jan 23 19:00:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:00:24 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:00:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:00:25 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e378cb20-e217-4231-960b-f50749860b80", "format": "json"}]: dispatch
Jan 23 19:00:25 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e378cb20-e217-4231-960b-f50749860b80", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:25 compute-0 ceph-mon[75097]: pgmap v893: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 21 KiB/s wr, 7 op/s
Jan 23 19:00:25 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:26 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e6c475e8-8ebf-4542-a7ab-36860fe78789", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:26 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e6c475e8-8ebf-4542-a7ab-36860fe78789", "format": "json"}]: dispatch
Jan 23 19:00:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 11 KiB/s wr, 4 op/s
Jan 23 19:00:27 compute-0 podman[245372]: 2026-01-23 19:00:27.007439594 +0000 UTC m=+0.088455612 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Jan 23 19:00:27 compute-0 ceph-mon[75097]: pgmap v894: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 11 KiB/s wr, 4 op/s
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "88b13502-f510-4325-a8f5-797343db3fc5", "format": "json"}]: dispatch
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:88b13502-f510-4325-a8f5-797343db3fc5, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:88b13502-f510-4325-a8f5-797343db3fc5, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 10 KiB/s wr, 3 op/s
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a78e7c29-37d2-401f-be9f-e1812f305e12", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:a78e7c29-37d2-401f-be9f-e1812f305e12, vol_name:cephfs) < ""
Jan 23 19:00:28 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "88b13502-f510-4325-a8f5-797343db3fc5", "format": "json"}]: dispatch
Jan 23 19:00:28 compute-0 ceph-mon[75097]: pgmap v895: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 10 KiB/s wr, 3 op/s
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a78e7c29-37d2-401f-be9f-e1812f305e12/dffda640-1089-4f8a-a2cd-cc394b3690b3'.
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a78e7c29-37d2-401f-be9f-e1812f305e12/.meta.tmp'
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a78e7c29-37d2-401f-be9f-e1812f305e12/.meta.tmp' to config b'/volumes/_nogroup/a78e7c29-37d2-401f-be9f-e1812f305e12/.meta'
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:a78e7c29-37d2-401f-be9f-e1812f305e12, vol_name:cephfs) < ""
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a78e7c29-37d2-401f-be9f-e1812f305e12", "format": "json"}]: dispatch
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a78e7c29-37d2-401f-be9f-e1812f305e12, vol_name:cephfs) < ""
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a78e7c29-37d2-401f-be9f-e1812f305e12, vol_name:cephfs) < ""
Jan 23 19:00:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:00:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:00:28
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['backups', 'images', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root']
Jan 23 19:00:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:00:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a78e7c29-37d2-401f-be9f-e1812f305e12", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a78e7c29-37d2-401f-be9f-e1812f305e12", "format": "json"}]: dispatch
Jan 23 19:00:29 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:00:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:00:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:00:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:00:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:00:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:00:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 19 KiB/s wr, 6 op/s
Jan 23 19:00:30 compute-0 ceph-mon[75097]: pgmap v896: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 19 KiB/s wr, 6 op/s
Jan 23 19:00:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:00:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:00:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:00:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:00:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:00:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:00:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:00:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:00:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:00:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "88b13502-f510-4325-a8f5-797343db3fc5_ac550a10-2960-4957-90e7-920e24404d04", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:88b13502-f510-4325-a8f5-797343db3fc5_ac550a10-2960-4957-90e7-920e24404d04, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:88b13502-f510-4325-a8f5-797343db3fc5_ac550a10-2960-4957-90e7-920e24404d04, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "88b13502-f510-4325-a8f5-797343db3fc5", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:88b13502-f510-4325-a8f5-797343db3fc5, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:88b13502-f510-4325-a8f5-797343db3fc5, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 5 op/s
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.552815) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194832552856, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2127, "num_deletes": 251, "total_data_size": 3677431, "memory_usage": 3780264, "flush_reason": "Manual Compaction"}
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e6c475e8-8ebf-4542-a7ab-36860fe78789", "format": "json"}]: dispatch
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e6c475e8-8ebf-4542-a7ab-36860fe78789, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e6c475e8-8ebf-4542-a7ab-36860fe78789, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e6c475e8-8ebf-4542-a7ab-36860fe78789' of type subvolume
Jan 23 19:00:32 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:32.581+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e6c475e8-8ebf-4542-a7ab-36860fe78789' of type subvolume
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194832584956, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3599402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16420, "largest_seqno": 18546, "table_properties": {"data_size": 3589699, "index_size": 6133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20101, "raw_average_key_size": 20, "raw_value_size": 3569978, "raw_average_value_size": 3598, "num_data_blocks": 276, "num_entries": 992, "num_filter_entries": 992, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769194621, "oldest_key_time": 1769194621, "file_creation_time": 1769194832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 32205 microseconds, and 7739 cpu microseconds.
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e6c475e8-8ebf-4542-a7ab-36860fe78789", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e6c475e8-8ebf-4542-a7ab-36860fe78789, vol_name:cephfs) < ""
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.585017) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3599402 bytes OK
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.585040) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.587522) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.587541) EVENT_LOG_v1 {"time_micros": 1769194832587535, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.587574) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3668399, prev total WAL file size 3668399, number of live WAL files 2.
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.588461) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3515KB)], [38(7834KB)]
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194832588504, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11621668, "oldest_snapshot_seqno": -1}
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e6c475e8-8ebf-4542-a7ab-36860fe78789'' moved to trashcan
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:00:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e6c475e8-8ebf-4542-a7ab-36860fe78789, vol_name:cephfs) < ""
Jan 23 19:00:32 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "88b13502-f510-4325-a8f5-797343db3fc5_ac550a10-2960-4957-90e7-920e24404d04", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:32 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "88b13502-f510-4325-a8f5-797343db3fc5", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:32 compute-0 ceph-mon[75097]: pgmap v897: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 5 op/s
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4540 keys, 9828888 bytes, temperature: kUnknown
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194832654177, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9828888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9794797, "index_size": 21629, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 109979, "raw_average_key_size": 24, "raw_value_size": 9709097, "raw_average_value_size": 2138, "num_data_blocks": 917, "num_entries": 4540, "num_filter_entries": 4540, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769194832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.654416) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9828888 bytes
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.656313) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.7 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5061, records dropped: 521 output_compression: NoCompression
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.656332) EVENT_LOG_v1 {"time_micros": 1769194832656323, "job": 18, "event": "compaction_finished", "compaction_time_micros": 65757, "compaction_time_cpu_micros": 23583, "output_level": 6, "num_output_files": 1, "total_output_size": 9828888, "num_input_records": 5061, "num_output_records": 4540, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194832657109, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194832658623, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.588408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.658689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.658694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.658696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.658698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:32 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:32.658700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:33 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e6c475e8-8ebf-4542-a7ab-36860fe78789", "format": "json"}]: dispatch
Jan 23 19:00:33 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e6c475e8-8ebf-4542-a7ab-36860fe78789", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 25 KiB/s wr, 8 op/s
Jan 23 19:00:34 compute-0 ceph-mon[75097]: pgmap v898: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 25 KiB/s wr, 8 op/s
Jan 23 19:00:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:35 compute-0 podman[245399]: 2026-01-23 19:00:35.976476122 +0000 UTC m=+0.056524470 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 23 19:00:36 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a78e7c29-37d2-401f-be9f-e1812f305e12", "format": "json"}]: dispatch
Jan 23 19:00:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a78e7c29-37d2-401f-be9f-e1812f305e12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a78e7c29-37d2-401f-be9f-e1812f305e12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:00:36 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:00:36.105+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a78e7c29-37d2-401f-be9f-e1812f305e12' of type subvolume
Jan 23 19:00:36 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a78e7c29-37d2-401f-be9f-e1812f305e12' of type subvolume
Jan 23 19:00:36 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a78e7c29-37d2-401f-be9f-e1812f305e12", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a78e7c29-37d2-401f-be9f-e1812f305e12, vol_name:cephfs) < ""
Jan 23 19:00:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a78e7c29-37d2-401f-be9f-e1812f305e12'' moved to trashcan
Jan 23 19:00:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:00:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a78e7c29-37d2-401f-be9f-e1812f305e12, vol_name:cephfs) < ""
Jan 23 19:00:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:36.381198) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194836381238, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 287, "num_deletes": 251, "total_data_size": 49480, "memory_usage": 56144, "flush_reason": "Manual Compaction"}
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 23 19:00:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 19 KiB/s wr, 5 op/s
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194836678497, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 48847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18547, "largest_seqno": 18833, "table_properties": {"data_size": 46941, "index_size": 134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5277, "raw_average_key_size": 19, "raw_value_size": 43148, "raw_average_value_size": 158, "num_data_blocks": 6, "num_entries": 273, "num_filter_entries": 273, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769194832, "oldest_key_time": 1769194832, "file_creation_time": 1769194836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 297337 microseconds, and 844 cpu microseconds.
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:36.678525) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 48847 bytes OK
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:36.678554) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:36.902113) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:36.902178) EVENT_LOG_v1 {"time_micros": 1769194836902167, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:36.902210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 47340, prev total WAL file size 62383, number of live WAL files 2.
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:00:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:36.902756) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(47KB)], [41(9598KB)]
Jan 23 19:00:36 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194836903106, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9877735, "oldest_snapshot_seqno": -1}
Jan 23 19:00:36 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4304 keys, 6533858 bytes, temperature: kUnknown
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194837006256, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6533858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6505936, "index_size": 16037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 105533, "raw_average_key_size": 24, "raw_value_size": 6428902, "raw_average_value_size": 1493, "num_data_blocks": 672, "num_entries": 4304, "num_filter_entries": 4304, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769194836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:37.006735) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6533858 bytes
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:37.210824) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.7 rd, 63.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.4 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(336.0) write-amplify(133.8) OK, records in: 4813, records dropped: 509 output_compression: NoCompression
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:37.210859) EVENT_LOG_v1 {"time_micros": 1769194837210845, "job": 20, "event": "compaction_finished", "compaction_time_micros": 103221, "compaction_time_cpu_micros": 20386, "output_level": 6, "num_output_files": 1, "total_output_size": 6533858, "num_input_records": 4813, "num_output_records": 4304, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194837211044, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194837212787, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:36.902652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:37.212831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:37.212836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:37.212838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:37.212840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:00:37.212842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:00:37 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a78e7c29-37d2-401f-be9f-e1812f305e12", "format": "json"}]: dispatch
Jan 23 19:00:37 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a78e7c29-37d2-401f-be9f-e1812f305e12", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:37 compute-0 ceph-mon[75097]: pgmap v899: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 19 KiB/s wr, 5 op/s
Jan 23 19:00:37 compute-0 ceph-mon[75097]: osdmap e126: 3 total, 3 up, 3 in
Jan 23 19:00:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 23 KiB/s wr, 6 op/s
Jan 23 19:00:39 compute-0 ceph-mon[75097]: pgmap v901: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 23 KiB/s wr, 6 op/s
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658954220435392 of space, bias 1.0, pg target 0.19976862661306177 quantized to 32 (current 32)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3465068454721318e-05 of space, bias 4.0, pg target 0.01615808214566558 quantized to 16 (current 16)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:00:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 19 KiB/s wr, 5 op/s
Jan 23 19:00:40 compute-0 ceph-mon[75097]: pgmap v902: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 19 KiB/s wr, 5 op/s
Jan 23 19:00:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 23 19:00:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 23 19:00:41 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "40099f4c-64b9-4793-91a5-8cec06f6370d", "format": "json"}]: dispatch
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:40099f4c-64b9-4793-91a5-8cec06f6370d, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:40099f4c-64b9-4793-91a5-8cec06f6370d, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:42 compute-0 ceph-mon[75097]: osdmap e127: 3 total, 3 up, 3 in
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 9.1 KiB/s wr, 3 op/s
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/894bfdbb-349c-49c6-8970-c5295fcef973'.
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta.tmp'
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta.tmp' to config b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta'
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "format": "json"}]: dispatch
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:00:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "40099f4c-64b9-4793-91a5-8cec06f6370d", "format": "json"}]: dispatch
Jan 23 19:00:43 compute-0 ceph-mon[75097]: pgmap v904: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 9.1 KiB/s wr, 3 op/s
Jan 23 19:00:43 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:00:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 18 KiB/s wr, 5 op/s
Jan 23 19:00:44 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:00:44 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "format": "json"}]: dispatch
Jan 23 19:00:45 compute-0 ceph-mon[75097]: pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 18 KiB/s wr, 5 op/s
Jan 23 19:00:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 322 B/s rd, 15 KiB/s wr, 5 op/s
Jan 23 19:00:46 compute-0 ceph-mon[75097]: pgmap v906: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 322 B/s rd, 15 KiB/s wr, 5 op/s
Jan 23 19:00:47 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "f6297e64-4bea-4cf5-a7c3-096c342cca94", "format": "json"}]: dispatch
Jan 23 19:00:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f6297e64-4bea-4cf5-a7c3-096c342cca94, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f6297e64-4bea-4cf5-a7c3-096c342cca94, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:47 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "1b7bb815-c96a-4c7f-8325-17c1098513a3", "format": "json"}]: dispatch
Jan 23 19:00:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1b7bb815-c96a-4c7f-8325-17c1098513a3, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1b7bb815-c96a-4c7f-8325-17c1098513a3, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:47 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "f6297e64-4bea-4cf5-a7c3-096c342cca94", "format": "json"}]: dispatch
Jan 23 19:00:47 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "1b7bb815-c96a-4c7f-8325-17c1098513a3", "format": "json"}]: dispatch
Jan 23 19:00:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 14 KiB/s wr, 4 op/s
Jan 23 19:00:48 compute-0 ceph-mon[75097]: pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 14 KiB/s wr, 4 op/s
Jan 23 19:00:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 2 op/s
Jan 23 19:00:50 compute-0 ceph-mon[75097]: pgmap v908: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 2 op/s
Jan 23 19:00:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s wr, 2 op/s
Jan 23 19:00:52 compute-0 ceph-mon[75097]: pgmap v909: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s wr, 2 op/s
Jan 23 19:00:53 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "ad76e9d3-bc9b-4f07-99a8-9c0edee4549e", "format": "json"}]: dispatch
Jan 23 19:00:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ad76e9d3-bc9b-4f07-99a8-9c0edee4549e, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ad76e9d3-bc9b-4f07-99a8-9c0edee4549e, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:54 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "9425dea3-7e92-4e57-96a9-0eeb8d61fea2", "format": "json"}]: dispatch
Jan 23 19:00:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9425dea3-7e92-4e57-96a9-0eeb8d61fea2, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9425dea3-7e92-4e57-96a9-0eeb8d61fea2, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:54 compute-0 nova_compute[240119]: 2026-01-23 19:00:54.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:00:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 3 op/s
Jan 23 19:00:54 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "ad76e9d3-bc9b-4f07-99a8-9c0edee4549e", "format": "json"}]: dispatch
Jan 23 19:00:54 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "9425dea3-7e92-4e57-96a9-0eeb8d61fea2", "format": "json"}]: dispatch
Jan 23 19:00:54 compute-0 ceph-mon[75097]: pgmap v910: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 3 op/s
Jan 23 19:00:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:00:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 1 op/s
Jan 23 19:00:56 compute-0 ceph-mon[75097]: pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 1 op/s
Jan 23 19:00:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:00:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3560019337' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:00:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:00:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3560019337' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:00:57 compute-0 nova_compute[240119]: 2026-01-23 19:00:57.123 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:00:57 compute-0 nova_compute[240119]: 2026-01-23 19:00:57.124 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:00:57 compute-0 nova_compute[240119]: 2026-01-23 19:00:57.124 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:00:57 compute-0 nova_compute[240119]: 2026-01-23 19:00:57.124 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:00:57 compute-0 nova_compute[240119]: 2026-01-23 19:00:57.124 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:00:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3560019337' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:00:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3560019337' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:00:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:00:57 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3642495468' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:00:57 compute-0 nova_compute[240119]: 2026-01-23 19:00:57.710 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:00:57 compute-0 nova_compute[240119]: 2026-01-23 19:00:57.890 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:00:57 compute-0 nova_compute[240119]: 2026-01-23 19:00:57.892 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:00:57 compute-0 nova_compute[240119]: 2026-01-23 19:00:57.893 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:00:57 compute-0 nova_compute[240119]: 2026-01-23 19:00:57.893 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:00:58 compute-0 podman[245440]: 2026-01-23 19:00:58.007434816 +0000 UTC m=+0.087176039 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 19:00:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 1 op/s
Jan 23 19:00:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3642495468' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:00:58 compute-0 ceph-mon[75097]: pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 1 op/s
Jan 23 19:00:59 compute-0 nova_compute[240119]: 2026-01-23 19:00:59.063 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:00:59 compute-0 nova_compute[240119]: 2026-01-23 19:00:59.063 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:00:59 compute-0 nova_compute[240119]: 2026-01-23 19:00:59.114 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "7de60689-f2ab-45ae-8aa7-dd2bb230a472", "format": "json"}]: dispatch
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7de60689-f2ab-45ae-8aa7-dd2bb230a472, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7de60689-f2ab-45ae-8aa7-dd2bb230a472, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:00:59 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "7de60689-f2ab-45ae-8aa7-dd2bb230a472", "format": "json"}]: dispatch
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "ad76e9d3-bc9b-4f07-99a8-9c0edee4549e_9a5b226f-dbf8-4a92-ba98-cee757f80297", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ad76e9d3-bc9b-4f07-99a8-9c0edee4549e_9a5b226f-dbf8-4a92-ba98-cee757f80297, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:00:59 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346405719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta.tmp'
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta.tmp' to config b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta'
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ad76e9d3-bc9b-4f07-99a8-9c0edee4549e_9a5b226f-dbf8-4a92-ba98-cee757f80297, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "ad76e9d3-bc9b-4f07-99a8-9c0edee4549e", "force": true, "format": "json"}]: dispatch
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ad76e9d3-bc9b-4f07-99a8-9c0edee4549e, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:00:59 compute-0 nova_compute[240119]: 2026-01-23 19:00:59.701 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:00:59 compute-0 nova_compute[240119]: 2026-01-23 19:00:59.705 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta.tmp'
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta.tmp' to config b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta'
Jan 23 19:00:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ad76e9d3-bc9b-4f07-99a8-9c0edee4549e, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:01:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:01:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:01:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:01:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:01:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:01:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:01:00 compute-0 nova_compute[240119]: 2026-01-23 19:01:00.363 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:01:00 compute-0 nova_compute[240119]: 2026-01-23 19:01:00.365 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:01:00 compute-0 nova_compute[240119]: 2026-01-23 19:01:00.365 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:01:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s wr, 1 op/s
Jan 23 19:01:00 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "ad76e9d3-bc9b-4f07-99a8-9c0edee4549e_9a5b226f-dbf8-4a92-ba98-cee757f80297", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:00 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1346405719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:01:00 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "ad76e9d3-bc9b-4f07-99a8-9c0edee4549e", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:00 compute-0 ceph-mon[75097]: pgmap v913: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s wr, 1 op/s
Jan 23 19:01:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.360 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.361 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.361 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.361 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:01:01 compute-0 CROND[245489]: (root) CMD (run-parts /etc/cron.hourly)
Jan 23 19:01:01 compute-0 run-parts[245492]: (/etc/cron.hourly) starting 0anacron
Jan 23 19:01:01 compute-0 run-parts[245498]: (/etc/cron.hourly) finished 0anacron
Jan 23 19:01:01 compute-0 CROND[245488]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.563 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.563 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.563 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.563 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.564 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.564 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.564 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:01 compute-0 nova_compute[240119]: 2026-01-23 19:01:01.564 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:01:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s wr, 1 op/s
Jan 23 19:01:02 compute-0 ceph-mon[75097]: pgmap v914: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s wr, 1 op/s
Jan 23 19:01:03 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "f6297e64-4bea-4cf5-a7c3-096c342cca94_8a8f4c34-3602-4c1b-9d6f-90bcadc0a2b7", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6297e64-4bea-4cf5-a7c3-096c342cca94_8a8f4c34-3602-4c1b-9d6f-90bcadc0a2b7, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta.tmp'
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta.tmp' to config b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta'
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6297e64-4bea-4cf5-a7c3-096c342cca94_8a8f4c34-3602-4c1b-9d6f-90bcadc0a2b7, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "f6297e64-4bea-4cf5-a7c3-096c342cca94", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6297e64-4bea-4cf5-a7c3-096c342cca94, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta.tmp'
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta.tmp' to config b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df/.meta'
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6297e64-4bea-4cf5-a7c3-096c342cca94, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "fb3f4422-819c-4771-8d5d-1753533710d7", "format": "json"}]: dispatch
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fb3f4422-819c-4771-8d5d-1753533710d7, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fb3f4422-819c-4771-8d5d-1753533710d7, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 3 op/s
Jan 23 19:01:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "f6297e64-4bea-4cf5-a7c3-096c342cca94_8a8f4c34-3602-4c1b-9d6f-90bcadc0a2b7", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "snap_name": "f6297e64-4bea-4cf5-a7c3-096c342cca94", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "fb3f4422-819c-4771-8d5d-1753533710d7", "format": "json"}]: dispatch
Jan 23 19:01:04 compute-0 ceph-mon[75097]: pgmap v915: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 3 op/s
Jan 23 19:01:05 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "fb3f4422-819c-4771-8d5d-1753533710d7_e9bef8c7-d2b4-47f2-8b32-4f9fbb9e41ec", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb3f4422-819c-4771-8d5d-1753533710d7_e9bef8c7-d2b4-47f2-8b32-4f9fbb9e41ec, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:01:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:01:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb3f4422-819c-4771-8d5d-1753533710d7_e9bef8c7-d2b4-47f2-8b32-4f9fbb9e41ec, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:05 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "fb3f4422-819c-4771-8d5d-1753533710d7", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb3f4422-819c-4771-8d5d-1753533710d7, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:01:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:01:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb3f4422-819c-4771-8d5d-1753533710d7, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 23 19:01:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 23 19:01:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 23 19:01:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 19:01:06 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:01:06.264 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 19:01:06 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:01:06.265 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 19:01:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 11 KiB/s wr, 2 op/s
Jan 23 19:01:06 compute-0 podman[245499]: 2026-01-23 19:01:06.996483025 +0000 UTC m=+0.078632801 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 19:01:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 23 19:01:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 23 19:01:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "fb3f4422-819c-4771-8d5d-1753533710d7_e9bef8c7-d2b4-47f2-8b32-4f9fbb9e41ec", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "fb3f4422-819c-4771-8d5d-1753533710d7", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:07 compute-0 ceph-mon[75097]: osdmap e128: 3 total, 3 up, 3 in
Jan 23 19:01:07 compute-0 ceph-mon[75097]: pgmap v917: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 11 KiB/s wr, 2 op/s
Jan 23 19:01:07 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 23 19:01:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "format": "json"}]: dispatch
Jan 23 19:01:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:08 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:01:08.099+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c3aacd4-958b-4e71-b998-b587ef3cb3df' of type subvolume
Jan 23 19:01:08 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c3aacd4-958b-4e71-b998-b587ef3cb3df' of type subvolume
Jan 23 19:01:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:01:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8c3aacd4-958b-4e71-b998-b587ef3cb3df'' moved to trashcan
Jan 23 19:01:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:01:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c3aacd4-958b-4e71-b998-b587ef3cb3df, vol_name:cephfs) < ""
Jan 23 19:01:08 compute-0 ceph-mon[75097]: osdmap e129: 3 total, 3 up, 3 in
Jan 23 19:01:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 3 op/s
Jan 23 19:01:09 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "format": "json"}]: dispatch
Jan 23 19:01:09 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8c3aacd4-958b-4e71-b998-b587ef3cb3df", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:09 compute-0 ceph-mon[75097]: pgmap v919: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 3 op/s
Jan 23 19:01:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "7de60689-f2ab-45ae-8aa7-dd2bb230a472_aaa210cc-1f19-45c2-8b54-f2b45f28d1c9", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7de60689-f2ab-45ae-8aa7-dd2bb230a472_aaa210cc-1f19-45c2-8b54-f2b45f28d1c9, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:01:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:01:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7de60689-f2ab-45ae-8aa7-dd2bb230a472_aaa210cc-1f19-45c2-8b54-f2b45f28d1c9, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "7de60689-f2ab-45ae-8aa7-dd2bb230a472", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7de60689-f2ab-45ae-8aa7-dd2bb230a472, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:01:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:01:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7de60689-f2ab-45ae-8aa7-dd2bb230a472, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:10 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:01:10.267 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 19:01:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "7de60689-f2ab-45ae-8aa7-dd2bb230a472_aaa210cc-1f19-45c2-8b54-f2b45f28d1c9", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "7de60689-f2ab-45ae-8aa7-dd2bb230a472", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 45 KiB/s wr, 9 op/s
Jan 23 19:01:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:01:11 compute-0 ceph-mon[75097]: pgmap v920: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 45 KiB/s wr, 9 op/s
Jan 23 19:01:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 33 KiB/s wr, 6 op/s
Jan 23 19:01:12 compute-0 ceph-mon[75097]: pgmap v921: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 33 KiB/s wr, 6 op/s
Jan 23 19:01:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "9425dea3-7e92-4e57-96a9-0eeb8d61fea2_dd32eeec-5a8f-4c4e-9ab5-8669e63515c5", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9425dea3-7e92-4e57-96a9-0eeb8d61fea2_dd32eeec-5a8f-4c4e-9ab5-8669e63515c5, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:01:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:01:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9425dea3-7e92-4e57-96a9-0eeb8d61fea2_dd32eeec-5a8f-4c4e-9ab5-8669e63515c5, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "9425dea3-7e92-4e57-96a9-0eeb8d61fea2", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9425dea3-7e92-4e57-96a9-0eeb8d61fea2, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:01:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:01:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9425dea3-7e92-4e57-96a9-0eeb8d61fea2, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:01:13.582 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:01:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:01:13.582 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:01:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:01:13.583 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:01:13 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "9425dea3-7e92-4e57-96a9-0eeb8d61fea2_dd32eeec-5a8f-4c4e-9ab5-8669e63515c5", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:13 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "9425dea3-7e92-4e57-96a9-0eeb8d61fea2", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 981 B/s rd, 58 KiB/s wr, 10 op/s
Jan 23 19:01:14 compute-0 ceph-mon[75097]: pgmap v922: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 981 B/s rd, 58 KiB/s wr, 10 op/s
Jan 23 19:01:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 23 19:01:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 23 19:01:16 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 23 19:01:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:01:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 23 19:01:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 23 19:01:16 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 61 KiB/s wr, 11 op/s
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "1b7bb815-c96a-4c7f-8325-17c1098513a3_c09bf33b-68fc-4afe-921b-aa0d9f3d7960", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1b7bb815-c96a-4c7f-8325-17c1098513a3_c09bf33b-68fc-4afe-921b-aa0d9f3d7960, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1b7bb815-c96a-4c7f-8325-17c1098513a3_c09bf33b-68fc-4afe-921b-aa0d9f3d7960, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "1b7bb815-c96a-4c7f-8325-17c1098513a3", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1b7bb815-c96a-4c7f-8325-17c1098513a3, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:01:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1b7bb815-c96a-4c7f-8325-17c1098513a3, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:17 compute-0 ceph-mon[75097]: osdmap e130: 3 total, 3 up, 3 in
Jan 23 19:01:17 compute-0 ceph-mon[75097]: osdmap e131: 3 total, 3 up, 3 in
Jan 23 19:01:17 compute-0 ceph-mon[75097]: pgmap v925: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 61 KiB/s wr, 11 op/s
Jan 23 19:01:18 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "1b7bb815-c96a-4c7f-8325-17c1098513a3_c09bf33b-68fc-4afe-921b-aa0d9f3d7960", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:18 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "1b7bb815-c96a-4c7f-8325-17c1098513a3", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 28 KiB/s wr, 5 op/s
Jan 23 19:01:19 compute-0 ceph-mon[75097]: pgmap v926: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 28 KiB/s wr, 5 op/s
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "40099f4c-64b9-4793-91a5-8cec06f6370d_97cdfd79-b988-4626-a98d-a77e7f8f9c6d", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:40099f4c-64b9-4793-91a5-8cec06f6370d_97cdfd79-b988-4626-a98d-a77e7f8f9c6d, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:40099f4c-64b9-4793-91a5-8cec06f6370d_97cdfd79-b988-4626-a98d-a77e7f8f9c6d, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "40099f4c-64b9-4793-91a5-8cec06f6370d", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:40099f4c-64b9-4793-91a5-8cec06f6370d, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp'
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta.tmp' to config b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f/.meta'
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:40099f4c-64b9-4793-91a5-8cec06f6370d, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 43 KiB/s wr, 9 op/s
Jan 23 19:01:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "40099f4c-64b9-4793-91a5-8cec06f6370d_97cdfd79-b988-4626-a98d-a77e7f8f9c6d", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "snap_name": "40099f4c-64b9-4793-91a5-8cec06f6370d", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:20 compute-0 ceph-mon[75097]: pgmap v927: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 43 KiB/s wr, 9 op/s
Jan 23 19:01:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:01:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 23 19:01:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 23 19:01:21 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 23 19:01:21 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d1a416c3-7f64-49d4-b1be-a176db014aeb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d1a416c3-7f64-49d4-b1be-a176db014aeb, vol_name:cephfs) < ""
Jan 23 19:01:21 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d1a416c3-7f64-49d4-b1be-a176db014aeb/a55bf9e0-b519-442a-b27b-cc3f11223d87'.
Jan 23 19:01:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d1a416c3-7f64-49d4-b1be-a176db014aeb/.meta.tmp'
Jan 23 19:01:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d1a416c3-7f64-49d4-b1be-a176db014aeb/.meta.tmp' to config b'/volumes/_nogroup/d1a416c3-7f64-49d4-b1be-a176db014aeb/.meta'
Jan 23 19:01:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d1a416c3-7f64-49d4-b1be-a176db014aeb, vol_name:cephfs) < ""
Jan 23 19:01:21 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d1a416c3-7f64-49d4-b1be-a176db014aeb", "format": "json"}]: dispatch
Jan 23 19:01:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d1a416c3-7f64-49d4-b1be-a176db014aeb, vol_name:cephfs) < ""
Jan 23 19:01:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d1a416c3-7f64-49d4-b1be-a176db014aeb, vol_name:cephfs) < ""
Jan 23 19:01:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:01:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 23 19:01:22 compute-0 ceph-mon[75097]: osdmap e132: 3 total, 3 up, 3 in
Jan 23 19:01:22 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 23 19:01:22 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 23 19:01:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 20 KiB/s wr, 5 op/s
Jan 23 19:01:23 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d1a416c3-7f64-49d4-b1be-a176db014aeb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:23 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d1a416c3-7f64-49d4-b1be-a176db014aeb", "format": "json"}]: dispatch
Jan 23 19:01:23 compute-0 ceph-mon[75097]: osdmap e133: 3 total, 3 up, 3 in
Jan 23 19:01:23 compute-0 ceph-mon[75097]: pgmap v930: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 20 KiB/s wr, 5 op/s
Jan 23 19:01:24 compute-0 sudo[245523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:01:24 compute-0 sudo[245523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:01:24 compute-0 sudo[245523]: pam_unix(sudo:session): session closed for user root
Jan 23 19:01:24 compute-0 sudo[245548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:01:24 compute-0 sudo[245548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:01:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "format": "json"}]: dispatch
Jan 23 19:01:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:01:24.445+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f' of type subvolume
Jan 23 19:01:24 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f' of type subvolume
Jan 23 19:01:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f'' moved to trashcan
Jan 23 19:01:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:01:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f, vol_name:cephfs) < ""
Jan 23 19:01:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 35 KiB/s wr, 7 op/s
Jan 23 19:01:24 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "format": "json"}]: dispatch
Jan 23 19:01:24 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "813e89aa-6f7f-4c70-8d5a-de6aa64dcb0f", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:24 compute-0 ceph-mon[75097]: pgmap v931: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 35 KiB/s wr, 7 op/s
Jan 23 19:01:24 compute-0 sudo[245548]: pam_unix(sudo:session): session closed for user root
Jan 23 19:01:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:01:24 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:01:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:01:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:01:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:01:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:01:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:01:25 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:01:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:01:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:01:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:01:25 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:01:25 compute-0 sudo[245603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:01:25 compute-0 sudo[245603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:01:25 compute-0 sudo[245603]: pam_unix(sudo:session): session closed for user root
Jan 23 19:01:25 compute-0 sudo[245628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:01:25 compute-0 sudo[245628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:01:25 compute-0 podman[245665]: 2026-01-23 19:01:25.474793047 +0000 UTC m=+0.081728732 container create e1c0876d60822312536821b16fbdc74d73920e210e54d5aea9cd3417125961ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:01:25 compute-0 podman[245665]: 2026-01-23 19:01:25.414332364 +0000 UTC m=+0.021268089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:01:25 compute-0 systemd[1]: Started libpod-conmon-e1c0876d60822312536821b16fbdc74d73920e210e54d5aea9cd3417125961ed.scope.
Jan 23 19:01:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:01:25 compute-0 podman[245665]: 2026-01-23 19:01:25.561637422 +0000 UTC m=+0.168573127 container init e1c0876d60822312536821b16fbdc74d73920e210e54d5aea9cd3417125961ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_maxwell, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 19:01:25 compute-0 podman[245665]: 2026-01-23 19:01:25.569001882 +0000 UTC m=+0.175937567 container start e1c0876d60822312536821b16fbdc74d73920e210e54d5aea9cd3417125961ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_maxwell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 19:01:25 compute-0 podman[245665]: 2026-01-23 19:01:25.573156773 +0000 UTC m=+0.180092478 container attach e1c0876d60822312536821b16fbdc74d73920e210e54d5aea9cd3417125961ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 23 19:01:25 compute-0 sharp_maxwell[245681]: 167 167
Jan 23 19:01:25 compute-0 systemd[1]: libpod-e1c0876d60822312536821b16fbdc74d73920e210e54d5aea9cd3417125961ed.scope: Deactivated successfully.
Jan 23 19:01:25 compute-0 conmon[245681]: conmon e1c0876d608223125368 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1c0876d60822312536821b16fbdc74d73920e210e54d5aea9cd3417125961ed.scope/container/memory.events
Jan 23 19:01:25 compute-0 podman[245665]: 2026-01-23 19:01:25.576744821 +0000 UTC m=+0.183680506 container died e1c0876d60822312536821b16fbdc74d73920e210e54d5aea9cd3417125961ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:01:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:01:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:01:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:01:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:01:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:01:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:01:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d15b6216357bc6f6b7e3935b576ed68e3ebec8a751bb725aaf3d635138f8406-merged.mount: Deactivated successfully.
Jan 23 19:01:25 compute-0 podman[245665]: 2026-01-23 19:01:25.841795287 +0000 UTC m=+0.448730972 container remove e1c0876d60822312536821b16fbdc74d73920e210e54d5aea9cd3417125961ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_maxwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:01:25 compute-0 systemd[1]: libpod-conmon-e1c0876d60822312536821b16fbdc74d73920e210e54d5aea9cd3417125961ed.scope: Deactivated successfully.
Jan 23 19:01:26 compute-0 podman[245705]: 2026-01-23 19:01:26.023590365 +0000 UTC m=+0.047754344 container create dce990bcb87ab91789e89d397db7174b8d019ca8b192edec1b5361701cf91b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_feynman, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:01:26 compute-0 podman[245705]: 2026-01-23 19:01:26.001893066 +0000 UTC m=+0.026057075 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:01:26 compute-0 systemd[1]: Started libpod-conmon-dce990bcb87ab91789e89d397db7174b8d019ca8b192edec1b5361701cf91b04.scope.
Jan 23 19:01:26 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8276e054bf3004b423f93858cebdf29f8e20e3ec6325eb01bd64b4beddef97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8276e054bf3004b423f93858cebdf29f8e20e3ec6325eb01bd64b4beddef97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8276e054bf3004b423f93858cebdf29f8e20e3ec6325eb01bd64b4beddef97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8276e054bf3004b423f93858cebdf29f8e20e3ec6325eb01bd64b4beddef97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8276e054bf3004b423f93858cebdf29f8e20e3ec6325eb01bd64b4beddef97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:26 compute-0 podman[245705]: 2026-01-23 19:01:26.166810934 +0000 UTC m=+0.190974943 container init dce990bcb87ab91789e89d397db7174b8d019ca8b192edec1b5361701cf91b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_feynman, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:01:26 compute-0 podman[245705]: 2026-01-23 19:01:26.173123188 +0000 UTC m=+0.197287167 container start dce990bcb87ab91789e89d397db7174b8d019ca8b192edec1b5361701cf91b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_feynman, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 19:01:26 compute-0 podman[245705]: 2026-01-23 19:01:26.177010222 +0000 UTC m=+0.201174231 container attach dce990bcb87ab91789e89d397db7174b8d019ca8b192edec1b5361701cf91b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_feynman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 19:01:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:01:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 36 KiB/s wr, 7 op/s
Jan 23 19:01:26 compute-0 focused_feynman[245722]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:01:26 compute-0 focused_feynman[245722]: --> All data devices are unavailable
Jan 23 19:01:26 compute-0 systemd[1]: libpod-dce990bcb87ab91789e89d397db7174b8d019ca8b192edec1b5361701cf91b04.scope: Deactivated successfully.
Jan 23 19:01:26 compute-0 podman[245705]: 2026-01-23 19:01:26.696412495 +0000 UTC m=+0.720576524 container died dce990bcb87ab91789e89d397db7174b8d019ca8b192edec1b5361701cf91b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_feynman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b8276e054bf3004b423f93858cebdf29f8e20e3ec6325eb01bd64b4beddef97-merged.mount: Deactivated successfully.
Jan 23 19:01:27 compute-0 ceph-mon[75097]: pgmap v932: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 36 KiB/s wr, 7 op/s
Jan 23 19:01:27 compute-0 podman[245705]: 2026-01-23 19:01:27.451626013 +0000 UTC m=+1.475789992 container remove dce990bcb87ab91789e89d397db7174b8d019ca8b192edec1b5361701cf91b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:01:27 compute-0 systemd[1]: libpod-conmon-dce990bcb87ab91789e89d397db7174b8d019ca8b192edec1b5361701cf91b04.scope: Deactivated successfully.
Jan 23 19:01:27 compute-0 sudo[245628]: pam_unix(sudo:session): session closed for user root
Jan 23 19:01:27 compute-0 sudo[245754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:01:27 compute-0 sudo[245754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:01:27 compute-0 sudo[245754]: pam_unix(sudo:session): session closed for user root
Jan 23 19:01:27 compute-0 sudo[245779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:01:27 compute-0 sudo[245779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:01:27 compute-0 podman[245817]: 2026-01-23 19:01:27.99377217 +0000 UTC m=+0.079260533 container create d9b0b3bb29811c8df67f1edd302de8329051477684d069edcd950f282ca5a049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:01:28 compute-0 systemd[1]: Started libpod-conmon-d9b0b3bb29811c8df67f1edd302de8329051477684d069edcd950f282ca5a049.scope.
Jan 23 19:01:28 compute-0 podman[245817]: 2026-01-23 19:01:27.937927459 +0000 UTC m=+0.023415822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:01:28 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:28 compute-0 podman[245817]: 2026-01-23 19:01:28.340585109 +0000 UTC m=+0.426073482 container init d9b0b3bb29811c8df67f1edd302de8329051477684d069edcd950f282ca5a049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rubin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:01:28 compute-0 podman[245817]: 2026-01-23 19:01:28.348278936 +0000 UTC m=+0.433767279 container start d9b0b3bb29811c8df67f1edd302de8329051477684d069edcd950f282ca5a049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 19:01:28 compute-0 compassionate_rubin[245833]: 167 167
Jan 23 19:01:28 compute-0 systemd[1]: libpod-d9b0b3bb29811c8df67f1edd302de8329051477684d069edcd950f282ca5a049.scope: Deactivated successfully.
Jan 23 19:01:28 compute-0 podman[245817]: 2026-01-23 19:01:28.352863067 +0000 UTC m=+0.438351500 container attach d9b0b3bb29811c8df67f1edd302de8329051477684d069edcd950f282ca5a049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rubin, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:01:28 compute-0 podman[245817]: 2026-01-23 19:01:28.353332489 +0000 UTC m=+0.438820832 container died d9b0b3bb29811c8df67f1edd302de8329051477684d069edcd950f282ca5a049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9/49b99125-16c3-4f13-8156-4b095d27bbdb'.
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 20 KiB/s wr, 3 op/s
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9/.meta.tmp'
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9/.meta.tmp' to config b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9/.meta'
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "format": "json"}]: dispatch
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:01:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:01:28
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms', '.mgr', 'backups']
Jan 23 19:01:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:01:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:29 compute-0 ceph-mon[75097]: pgmap v933: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 20 KiB/s wr, 3 op/s
Jan 23 19:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-39583d575a63b64dde7df4edb85e749f0486d3ff5b08d641514a5d238003adc1-merged.mount: Deactivated successfully.
Jan 23 19:01:29 compute-0 podman[245817]: 2026-01-23 19:01:29.582086092 +0000 UTC m=+1.667574435 container remove d9b0b3bb29811c8df67f1edd302de8329051477684d069edcd950f282ca5a049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:01:29 compute-0 systemd[1]: libpod-conmon-d9b0b3bb29811c8df67f1edd302de8329051477684d069edcd950f282ca5a049.scope: Deactivated successfully.
Jan 23 19:01:29 compute-0 podman[245834]: 2026-01-23 19:01:29.673443727 +0000 UTC m=+1.626331089 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:01:29 compute-0 podman[245881]: 2026-01-23 19:01:29.729845502 +0000 UTC m=+0.020698536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:01:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:01:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:01:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:01:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:01:30 compute-0 podman[245881]: 2026-01-23 19:01:30.074751244 +0000 UTC m=+0.365604288 container create 0a0a13cd18389aa95ffa277df6910477824059a6f521bcfd0cea7c1d95802cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_antonelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 19:01:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:01:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:01:30 compute-0 systemd[1]: Started libpod-conmon-0a0a13cd18389aa95ffa277df6910477824059a6f521bcfd0cea7c1d95802cd8.scope.
Jan 23 19:01:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/083a69a7aff514a22896fa0182d270459b0ef229e5c226dac51af4b77c429608/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/083a69a7aff514a22896fa0182d270459b0ef229e5c226dac51af4b77c429608/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/083a69a7aff514a22896fa0182d270459b0ef229e5c226dac51af4b77c429608/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/083a69a7aff514a22896fa0182d270459b0ef229e5c226dac51af4b77c429608/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:30 compute-0 podman[245881]: 2026-01-23 19:01:30.268861912 +0000 UTC m=+0.559714936 container init 0a0a13cd18389aa95ffa277df6910477824059a6f521bcfd0cea7c1d95802cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:01:30 compute-0 podman[245881]: 2026-01-23 19:01:30.277288438 +0000 UTC m=+0.568141452 container start 0a0a13cd18389aa95ffa277df6910477824059a6f521bcfd0cea7c1d95802cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 19:01:30 compute-0 podman[245881]: 2026-01-23 19:01:30.367171277 +0000 UTC m=+0.658024481 container attach 0a0a13cd18389aa95ffa277df6910477824059a6f521bcfd0cea7c1d95802cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_antonelli, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:01:30 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "format": "json"}]: dispatch
Jan 23 19:01:30 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 440 B/s rd, 39 KiB/s wr, 6 op/s
Jan 23 19:01:30 compute-0 sad_antonelli[245897]: {
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:     "0": [
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:         {
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "devices": [
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "/dev/loop3"
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             ],
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_name": "ceph_lv0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_size": "21470642176",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "name": "ceph_lv0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "tags": {
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.cluster_name": "ceph",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.crush_device_class": "",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.encrypted": "0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.objectstore": "bluestore",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.osd_id": "0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.type": "block",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.vdo": "0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.with_tpm": "0"
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             },
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "type": "block",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "vg_name": "ceph_vg0"
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:         }
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:     ],
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:     "1": [
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:         {
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "devices": [
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "/dev/loop4"
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             ],
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_name": "ceph_lv1",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_size": "21470642176",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "name": "ceph_lv1",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "tags": {
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.cluster_name": "ceph",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.crush_device_class": "",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.encrypted": "0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.objectstore": "bluestore",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.osd_id": "1",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.type": "block",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.vdo": "0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.with_tpm": "0"
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             },
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "type": "block",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "vg_name": "ceph_vg1"
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:         }
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:     ],
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:     "2": [
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:         {
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "devices": [
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "/dev/loop5"
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             ],
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_name": "ceph_lv2",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_size": "21470642176",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "name": "ceph_lv2",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "tags": {
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.cluster_name": "ceph",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.crush_device_class": "",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.encrypted": "0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.objectstore": "bluestore",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.osd_id": "2",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.type": "block",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.vdo": "0",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:                 "ceph.with_tpm": "0"
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             },
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "type": "block",
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:             "vg_name": "ceph_vg2"
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:         }
Jan 23 19:01:30 compute-0 sad_antonelli[245897]:     ]
Jan 23 19:01:30 compute-0 sad_antonelli[245897]: }
Jan 23 19:01:30 compute-0 systemd[1]: libpod-0a0a13cd18389aa95ffa277df6910477824059a6f521bcfd0cea7c1d95802cd8.scope: Deactivated successfully.
Jan 23 19:01:30 compute-0 podman[245881]: 2026-01-23 19:01:30.569508045 +0000 UTC m=+0.860361089 container died 0a0a13cd18389aa95ffa277df6910477824059a6f521bcfd0cea7c1d95802cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 19:01:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-083a69a7aff514a22896fa0182d270459b0ef229e5c226dac51af4b77c429608-merged.mount: Deactivated successfully.
Jan 23 19:01:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:01:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:01:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:01:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:01:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:01:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:01:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:01:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:01:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:01:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:01:31 compute-0 podman[245881]: 2026-01-23 19:01:31.096404551 +0000 UTC m=+1.387257585 container remove 0a0a13cd18389aa95ffa277df6910477824059a6f521bcfd0cea7c1d95802cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 19:01:31 compute-0 systemd[1]: libpod-conmon-0a0a13cd18389aa95ffa277df6910477824059a6f521bcfd0cea7c1d95802cd8.scope: Deactivated successfully.
Jan 23 19:01:31 compute-0 sudo[245779]: pam_unix(sudo:session): session closed for user root
Jan 23 19:01:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:01:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 23 19:01:31 compute-0 sudo[245919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:01:31 compute-0 sudo[245919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:01:31 compute-0 sudo[245919]: pam_unix(sudo:session): session closed for user root
Jan 23 19:01:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 23 19:01:31 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 23 19:01:31 compute-0 sudo[245944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:01:31 compute-0 sudo[245944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:01:31 compute-0 ceph-mon[75097]: pgmap v934: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 440 B/s rd, 39 KiB/s wr, 6 op/s
Jan 23 19:01:31 compute-0 ceph-mon[75097]: osdmap e134: 3 total, 3 up, 3 in
Jan 23 19:01:31 compute-0 podman[245981]: 2026-01-23 19:01:31.564263128 +0000 UTC m=+0.024516658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:01:31 compute-0 podman[245981]: 2026-01-23 19:01:31.721264783 +0000 UTC m=+0.181518293 container create a7f1073b5f7a4a93e4d3aeac03fd8ea9b72da679d8f83b0800b35b1c0f173e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 19:01:31 compute-0 systemd[1]: Started libpod-conmon-a7f1073b5f7a4a93e4d3aeac03fd8ea9b72da679d8f83b0800b35b1c0f173e3d.scope.
Jan 23 19:01:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:01:31 compute-0 podman[245981]: 2026-01-23 19:01:31.82336519 +0000 UTC m=+0.283618710 container init a7f1073b5f7a4a93e4d3aeac03fd8ea9b72da679d8f83b0800b35b1c0f173e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 19:01:31 compute-0 podman[245981]: 2026-01-23 19:01:31.832843361 +0000 UTC m=+0.293096861 container start a7f1073b5f7a4a93e4d3aeac03fd8ea9b72da679d8f83b0800b35b1c0f173e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_sammet, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:01:31 compute-0 podman[245981]: 2026-01-23 19:01:31.836947831 +0000 UTC m=+0.297201351 container attach a7f1073b5f7a4a93e4d3aeac03fd8ea9b72da679d8f83b0800b35b1c0f173e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:01:31 compute-0 infallible_sammet[245998]: 167 167
Jan 23 19:01:31 compute-0 systemd[1]: libpod-a7f1073b5f7a4a93e4d3aeac03fd8ea9b72da679d8f83b0800b35b1c0f173e3d.scope: Deactivated successfully.
Jan 23 19:01:31 compute-0 podman[245981]: 2026-01-23 19:01:31.840710632 +0000 UTC m=+0.300964132 container died a7f1073b5f7a4a93e4d3aeac03fd8ea9b72da679d8f83b0800b35b1c0f173e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 19:01:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0aa73b0b9156abca5b55df4f82a76a9d17bd86c4cacb97966df9a6cc7128c12-merged.mount: Deactivated successfully.
Jan 23 19:01:31 compute-0 podman[245981]: 2026-01-23 19:01:31.918820085 +0000 UTC m=+0.379073585 container remove a7f1073b5f7a4a93e4d3aeac03fd8ea9b72da679d8f83b0800b35b1c0f173e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_sammet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 19:01:31 compute-0 systemd[1]: libpod-conmon-a7f1073b5f7a4a93e4d3aeac03fd8ea9b72da679d8f83b0800b35b1c0f173e3d.scope: Deactivated successfully.
Jan 23 19:01:32 compute-0 podman[246024]: 2026-01-23 19:01:32.111759306 +0000 UTC m=+0.076771392 container create ab636c99766bb5058a70c7447a128c07a97c31dad1d42e48b48c62ff712094a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:01:32 compute-0 systemd[1]: Started libpod-conmon-ab636c99766bb5058a70c7447a128c07a97c31dad1d42e48b48c62ff712094a3.scope.
Jan 23 19:01:32 compute-0 podman[246024]: 2026-01-23 19:01:32.06475346 +0000 UTC m=+0.029765586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:01:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0fba04b41e8ff9e6ce2d45c4ebaf3cb3651b151cf37dd9e26adc078e1124e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0fba04b41e8ff9e6ce2d45c4ebaf3cb3651b151cf37dd9e26adc078e1124e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0fba04b41e8ff9e6ce2d45c4ebaf3cb3651b151cf37dd9e26adc078e1124e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0fba04b41e8ff9e6ce2d45c4ebaf3cb3651b151cf37dd9e26adc078e1124e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:01:32 compute-0 podman[246024]: 2026-01-23 19:01:32.231529834 +0000 UTC m=+0.196541930 container init ab636c99766bb5058a70c7447a128c07a97c31dad1d42e48b48c62ff712094a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_blackwell, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 19:01:32 compute-0 podman[246024]: 2026-01-23 19:01:32.238729729 +0000 UTC m=+0.203741805 container start ab636c99766bb5058a70c7447a128c07a97c31dad1d42e48b48c62ff712094a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_blackwell, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:01:32 compute-0 podman[246024]: 2026-01-23 19:01:32.242941981 +0000 UTC m=+0.207954057 container attach ab636c99766bb5058a70c7447a128c07a97c31dad1d42e48b48c62ff712094a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 19:01:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 36 KiB/s wr, 5 op/s
Jan 23 19:01:32 compute-0 ceph-mon[75097]: pgmap v936: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 36 KiB/s wr, 5 op/s
Jan 23 19:01:32 compute-0 lvm[246120]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:01:32 compute-0 lvm[246119]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:01:32 compute-0 lvm[246119]: VG ceph_vg0 finished
Jan 23 19:01:32 compute-0 lvm[246120]: VG ceph_vg1 finished
Jan 23 19:01:32 compute-0 lvm[246122]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:01:32 compute-0 lvm[246122]: VG ceph_vg2 finished
Jan 23 19:01:33 compute-0 stupefied_blackwell[246040]: {}
Jan 23 19:01:33 compute-0 systemd[1]: libpod-ab636c99766bb5058a70c7447a128c07a97c31dad1d42e48b48c62ff712094a3.scope: Deactivated successfully.
Jan 23 19:01:33 compute-0 podman[246024]: 2026-01-23 19:01:33.065762685 +0000 UTC m=+1.030774761 container died ab636c99766bb5058a70c7447a128c07a97c31dad1d42e48b48c62ff712094a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_blackwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 23 19:01:33 compute-0 systemd[1]: libpod-ab636c99766bb5058a70c7447a128c07a97c31dad1d42e48b48c62ff712094a3.scope: Consumed 1.339s CPU time.
Jan 23 19:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0fba04b41e8ff9e6ce2d45c4ebaf3cb3651b151cf37dd9e26adc078e1124e4-merged.mount: Deactivated successfully.
Jan 23 19:01:33 compute-0 podman[246024]: 2026-01-23 19:01:33.321856453 +0000 UTC m=+1.286868539 container remove ab636c99766bb5058a70c7447a128c07a97c31dad1d42e48b48c62ff712094a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 19:01:33 compute-0 systemd[1]: libpod-conmon-ab636c99766bb5058a70c7447a128c07a97c31dad1d42e48b48c62ff712094a3.scope: Deactivated successfully.
Jan 23 19:01:33 compute-0 sudo[245944]: pam_unix(sudo:session): session closed for user root
Jan 23 19:01:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:01:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:01:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:01:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:01:33 compute-0 sudo[246138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:01:33 compute-0 sudo[246138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:01:33 compute-0 sudo[246138]: pam_unix(sudo:session): session closed for user root
Jan 23 19:01:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "snap_name": "e321d9c3-2ef6-4719-b058-31b227661b87", "format": "json"}]: dispatch
Jan 23 19:01:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e321d9c3-2ef6-4719-b058-31b227661b87, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e321d9c3-2ef6-4719-b058-31b227661b87, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:01:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:01:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 29 KiB/s wr, 4 op/s
Jan 23 19:01:35 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "snap_name": "e321d9c3-2ef6-4719-b058-31b227661b87", "format": "json"}]: dispatch
Jan 23 19:01:35 compute-0 ceph-mon[75097]: pgmap v937: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 29 KiB/s wr, 4 op/s
Jan 23 19:01:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:01:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 29 KiB/s wr, 4 op/s
Jan 23 19:01:36 compute-0 ceph-mon[75097]: pgmap v938: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 29 KiB/s wr, 4 op/s
Jan 23 19:01:37 compute-0 podman[246163]: 2026-01-23 19:01:37.998002896 +0000 UTC m=+0.072893596 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 23 19:01:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 29 KiB/s wr, 4 op/s
Jan 23 19:01:38 compute-0 ceph-mon[75097]: pgmap v939: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 29 KiB/s wr, 4 op/s
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659025480530337 of space, bias 1.0, pg target 0.19977076441591013 quantized to 32 (current 32)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 3.353247009404525e-05 of space, bias 4.0, pg target 0.0402389641128543 quantized to 16 (current 16)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:01:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Jan 23 19:01:40 compute-0 ceph-mon[75097]: pgmap v940: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Jan 23 19:01:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.277689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194901277727, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 986, "num_deletes": 261, "total_data_size": 1207154, "memory_usage": 1226912, "flush_reason": "Manual Compaction"}
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194901287620, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1193652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18834, "largest_seqno": 19819, "table_properties": {"data_size": 1188734, "index_size": 2378, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11109, "raw_average_key_size": 19, "raw_value_size": 1178576, "raw_average_value_size": 2082, "num_data_blocks": 106, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769194836, "oldest_key_time": 1769194836, "file_creation_time": 1769194901, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 9996 microseconds, and 3603 cpu microseconds.
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.287677) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1193652 bytes OK
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.287703) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.289804) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.289849) EVENT_LOG_v1 {"time_micros": 1769194901289840, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.289873) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1202283, prev total WAL file size 1202283, number of live WAL files 2.
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.290460) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1165KB)], [44(6380KB)]
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194901290506, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7727510, "oldest_snapshot_seqno": -1}
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4337 keys, 7604164 bytes, temperature: kUnknown
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194901355652, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7604164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7574338, "index_size": 17876, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107605, "raw_average_key_size": 24, "raw_value_size": 7495045, "raw_average_value_size": 1728, "num_data_blocks": 748, "num_entries": 4337, "num_filter_entries": 4337, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769194901, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.355879) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7604164 bytes
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.358169) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.5 rd, 116.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.2 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(12.8) write-amplify(6.4) OK, records in: 4870, records dropped: 533 output_compression: NoCompression
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.358187) EVENT_LOG_v1 {"time_micros": 1769194901358178, "job": 22, "event": "compaction_finished", "compaction_time_micros": 65216, "compaction_time_cpu_micros": 17512, "output_level": 6, "num_output_files": 1, "total_output_size": 7604164, "num_input_records": 4870, "num_output_records": 4337, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194901358791, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769194901360360, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.290361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.360475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.360483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.360486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.360489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:01:41 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:01:41.360493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:01:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Jan 23 19:01:42 compute-0 ceph-mon[75097]: pgmap v941: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Jan 23 19:01:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 1 op/s
Jan 23 19:01:44 compute-0 ceph-mon[75097]: pgmap v942: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 1 op/s
Jan 23 19:01:44 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:01:44 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/c1ace486-d1b1-40ea-947d-f01d12491da4'.
Jan 23 19:01:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/.meta.tmp'
Jan 23 19:01:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/.meta.tmp' to config b'/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/.meta'
Jan 23 19:01:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:01:44 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "format": "json"}]: dispatch
Jan 23 19:01:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:01:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:01:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:01:44 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:45 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:45 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "format": "json"}]: dispatch
Jan 23 19:01:45 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:01:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s wr, 0 op/s
Jan 23 19:01:46 compute-0 ceph-mon[75097]: pgmap v943: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s wr, 0 op/s
Jan 23 19:01:48 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, vol_name:cephfs) < ""
Jan 23 19:01:48 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/ffba04e2-08cf-47b9-bd47-0043de8e96c3'.
Jan 23 19:01:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/.meta.tmp'
Jan 23 19:01:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/.meta.tmp' to config b'/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/.meta'
Jan 23 19:01:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, vol_name:cephfs) < ""
Jan 23 19:01:48 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "format": "json"}]: dispatch
Jan 23 19:01:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, vol_name:cephfs) < ""
Jan 23 19:01:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, vol_name:cephfs) < ""
Jan 23 19:01:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:01:48 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s wr, 0 op/s
Jan 23 19:01:49 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:49 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "snap_name": "e321d9c3-2ef6-4719-b058-31b227661b87_3b316d82-4a97-4373-b0b5-54ea536d2817", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e321d9c3-2ef6-4719-b058-31b227661b87_3b316d82-4a97-4373-b0b5-54ea536d2817, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9/.meta.tmp'
Jan 23 19:01:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9/.meta.tmp' to config b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9/.meta'
Jan 23 19:01:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e321d9c3-2ef6-4719-b058-31b227661b87_3b316d82-4a97-4373-b0b5-54ea536d2817, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:49 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "snap_name": "e321d9c3-2ef6-4719-b058-31b227661b87", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e321d9c3-2ef6-4719-b058-31b227661b87, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9/.meta.tmp'
Jan 23 19:01:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9/.meta.tmp' to config b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9/.meta'
Jan 23 19:01:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e321d9c3-2ef6-4719-b058-31b227661b87, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:50 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:50 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "format": "json"}]: dispatch
Jan 23 19:01:50 compute-0 ceph-mon[75097]: pgmap v944: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s wr, 0 op/s
Jan 23 19:01:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s wr, 2 op/s
Jan 23 19:01:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "snap_name": "e321d9c3-2ef6-4719-b058-31b227661b87_3b316d82-4a97-4373-b0b5-54ea536d2817", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "snap_name": "e321d9c3-2ef6-4719-b058-31b227661b87", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:51 compute-0 ceph-mon[75097]: pgmap v945: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s wr, 2 op/s
Jan 23 19:01:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, vol_name:cephfs) < ""
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/f32de13e-2f8a-40aa-a81e-50c3527b0601'.
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/.meta.tmp'
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/.meta.tmp' to config b'/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/.meta'
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, vol_name:cephfs) < ""
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "format": "json"}]: dispatch
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, vol_name:cephfs) < ""
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, vol_name:cephfs) < ""
Jan 23 19:01:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:01:51 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0b2a4f38-d054-4cd4-98b9-e0a2222a0f26", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0b2a4f38-d054-4cd4-98b9-e0a2222a0f26, vol_name:cephfs) < ""
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0b2a4f38-d054-4cd4-98b9-e0a2222a0f26/70fda8a1-fc9d-4241-920f-230513aae6ff'.
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0b2a4f38-d054-4cd4-98b9-e0a2222a0f26/.meta.tmp'
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0b2a4f38-d054-4cd4-98b9-e0a2222a0f26/.meta.tmp' to config b'/volumes/_nogroup/0b2a4f38-d054-4cd4-98b9-e0a2222a0f26/.meta'
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0b2a4f38-d054-4cd4-98b9-e0a2222a0f26, vol_name:cephfs) < ""
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0b2a4f38-d054-4cd4-98b9-e0a2222a0f26", "format": "json"}]: dispatch
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0b2a4f38-d054-4cd4-98b9-e0a2222a0f26, vol_name:cephfs) < ""
Jan 23 19:01:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0b2a4f38-d054-4cd4-98b9-e0a2222a0f26, vol_name:cephfs) < ""
Jan 23 19:01:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:01:51 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:52 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:52 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "format": "json"}]: dispatch
Jan 23 19:01:52 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:52 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "auth_id": "Joe", "tenant_id": "e19ff7e178044a3f9516cdc773fa4d6b", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:01:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, tenant_id:e19ff7e178044a3f9516cdc773fa4d6b, vol_name:cephfs) < ""
Jan 23 19:01:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Jan 23 19:01:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 23 19:01:52 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID Joe with tenant e19ff7e178044a3f9516cdc773fa4d6b
Jan 23 19:01:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/ffba04e2-08cf-47b9-bd47-0043de8e96c3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1291e2ec-1b23-4741-b658-cb06b2d77e76", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:01:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/ffba04e2-08cf-47b9-bd47-0043de8e96c3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1291e2ec-1b23-4741-b658-cb06b2d77e76", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:01:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/ffba04e2-08cf-47b9-bd47-0043de8e96c3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1291e2ec-1b23-4741-b658-cb06b2d77e76", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:01:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, tenant_id:e19ff7e178044a3f9516cdc773fa4d6b, vol_name:cephfs) < ""
Jan 23 19:01:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Jan 23 19:01:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0b2a4f38-d054-4cd4-98b9-e0a2222a0f26", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0b2a4f38-d054-4cd4-98b9-e0a2222a0f26", "format": "json"}]: dispatch
Jan 23 19:01:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "auth_id": "Joe", "tenant_id": "e19ff7e178044a3f9516cdc773fa4d6b", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:01:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 23 19:01:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/ffba04e2-08cf-47b9-bd47-0043de8e96c3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1291e2ec-1b23-4741-b658-cb06b2d77e76", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:01:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/ffba04e2-08cf-47b9-bd47-0043de8e96c3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1291e2ec-1b23-4741-b658-cb06b2d77e76", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:01:53 compute-0 ceph-mon[75097]: pgmap v946: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Jan 23 19:01:53 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "format": "json"}]: dispatch
Jan 23 19:01:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:53 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:01:53.439+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9' of type subvolume
Jan 23 19:01:53 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9' of type subvolume
Jan 23 19:01:53 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9'' moved to trashcan
Jan 23 19:01:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:01:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9, vol_name:cephfs) < ""
Jan 23 19:01:54 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "format": "json"}]: dispatch
Jan 23 19:01:54 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "349e50d8-8b7b-4b2a-9bbb-82e4a22fd8a9", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 6 op/s
Jan 23 19:01:54 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "auth_id": "tempest-cephx-id-1573583560", "tenant_id": "bd0d4498326a45708c5d8845efe3901e", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:01:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1573583560, format:json, prefix:fs subvolume authorize, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, tenant_id:bd0d4498326a45708c5d8845efe3901e, vol_name:cephfs) < ""
Jan 23 19:01:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1573583560", "format": "json"} v 0)
Jan 23 19:01:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1573583560", "format": "json"} : dispatch
Jan 23 19:01:54 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID tempest-cephx-id-1573583560 with tenant bd0d4498326a45708c5d8845efe3901e
Jan 23 19:01:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1573583560", "caps": ["mds", "allow rw path=/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/f32de13e-2f8a-40aa-a81e-50c3527b0601", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:01:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1573583560", "caps": ["mds", "allow rw path=/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/f32de13e-2f8a-40aa-a81e-50c3527b0601", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:01:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1573583560", "caps": ["mds", "allow rw path=/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/f32de13e-2f8a-40aa-a81e-50c3527b0601", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:01:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1573583560, format:json, prefix:fs subvolume authorize, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, tenant_id:bd0d4498326a45708c5d8845efe3901e, vol_name:cephfs) < ""
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0b2a4f38-d054-4cd4-98b9-e0a2222a0f26", "format": "json"}]: dispatch
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0b2a4f38-d054-4cd4-98b9-e0a2222a0f26, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0b2a4f38-d054-4cd4-98b9-e0a2222a0f26, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:55 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:01:55.261+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0b2a4f38-d054-4cd4-98b9-e0a2222a0f26' of type subvolume
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0b2a4f38-d054-4cd4-98b9-e0a2222a0f26' of type subvolume
Jan 23 19:01:55 compute-0 ceph-mon[75097]: pgmap v947: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 6 op/s
Jan 23 19:01:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1573583560", "format": "json"} : dispatch
Jan 23 19:01:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1573583560", "caps": ["mds", "allow rw path=/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/f32de13e-2f8a-40aa-a81e-50c3527b0601", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:01:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1573583560", "caps": ["mds", "allow rw path=/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/f32de13e-2f8a-40aa-a81e-50c3527b0601", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0b2a4f38-d054-4cd4-98b9-e0a2222a0f26", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0b2a4f38-d054-4cd4-98b9-e0a2222a0f26, vol_name:cephfs) < ""
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0b2a4f38-d054-4cd4-98b9-e0a2222a0f26'' moved to trashcan
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0b2a4f38-d054-4cd4-98b9-e0a2222a0f26, vol_name:cephfs) < ""
Jan 23 19:01:55 compute-0 nova_compute[240119]: 2026-01-23 19:01:55.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:55 compute-0 nova_compute[240119]: 2026-01-23 19:01:55.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:55 compute-0 nova_compute[240119]: 2026-01-23 19:01:55.377 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:01:55 compute-0 nova_compute[240119]: 2026-01-23 19:01:55.377 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:01:55 compute-0 nova_compute[240119]: 2026-01-23 19:01:55.377 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:01:55 compute-0 nova_compute[240119]: 2026-01-23 19:01:55.377 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:01:55 compute-0 nova_compute[240119]: 2026-01-23 19:01:55.378 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/229e3528-6bd5-404f-9247-9cadfdbd82bc'.
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/.meta.tmp'
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/.meta.tmp' to config b'/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/.meta'
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "format": "json"}]: dispatch
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:01:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:01:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:01:55 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:01:55 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/930269723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:01:55 compute-0 nova_compute[240119]: 2026-01-23 19:01:55.943 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "auth_id": "tempest-cephx-id-1573583560", "format": "json"}]: dispatch
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1573583560, format:json, prefix:fs subvolume deauthorize, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, vol_name:cephfs) < ""
Jan 23 19:01:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1573583560", "format": "json"} v 0)
Jan 23 19:01:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1573583560", "format": "json"} : dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1573583560"} v 0)
Jan 23 19:01:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1573583560"} : dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1573583560"}]': finished
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1573583560, format:json, prefix:fs subvolume deauthorize, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, vol_name:cephfs) < ""
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "auth_id": "tempest-cephx-id-1573583560", "format": "json"}]: dispatch
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1573583560, format:json, prefix:fs subvolume evict, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, vol_name:cephfs) < ""
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1573583560, client_metadata.root=/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/f32de13e-2f8a-40aa-a81e-50c3527b0601
Jan 23 19:01:56 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=tempest-cephx-id-1573583560,client_metadata.root=/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96/f32de13e-2f8a-40aa-a81e-50c3527b0601],prefix=session evict} (starting...)
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1573583560, format:json, prefix:fs subvolume evict, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, vol_name:cephfs) < ""
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.128 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.129 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5115MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.129 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.130 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.204 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.204 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "format": "json"}]: dispatch
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:56 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:01:56.216+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7e746fb3-c8b1-4b98-8d5d-0856a5904d96' of type subvolume
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7e746fb3-c8b1-4b98-8d5d-0856a5904d96' of type subvolume
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, vol_name:cephfs) < ""
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.221 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7e746fb3-c8b1-4b98-8d5d-0856a5904d96'' moved to trashcan
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7e746fb3-c8b1-4b98-8d5d-0856a5904d96, vol_name:cephfs) < ""
Jan 23 19:01:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "auth_id": "tempest-cephx-id-1573583560", "tenant_id": "bd0d4498326a45708c5d8845efe3901e", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0b2a4f38-d054-4cd4-98b9-e0a2222a0f26", "format": "json"}]: dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0b2a4f38-d054-4cd4-98b9-e0a2222a0f26", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "format": "json"}]: dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/930269723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1573583560", "format": "json"} : dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1573583560"} : dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1573583560"}]': finished
Jan 23 19:01:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 6 op/s
Jan 23 19:01:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:01:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476621829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:01:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/56729809' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:01:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:01:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/56729809' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.781 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.786 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.800 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.802 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:01:56 compute-0 nova_compute[240119]: 2026-01-23 19:01:56.802 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d1a416c3-7f64-49d4-b1be-a176db014aeb", "format": "json"}]: dispatch
Jan 23 19:01:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d1a416c3-7f64-49d4-b1be-a176db014aeb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d1a416c3-7f64-49d4-b1be-a176db014aeb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:01:57 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:01:57.002+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd1a416c3-7f64-49d4-b1be-a176db014aeb' of type subvolume
Jan 23 19:01:57 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd1a416c3-7f64-49d4-b1be-a176db014aeb' of type subvolume
Jan 23 19:01:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d1a416c3-7f64-49d4-b1be-a176db014aeb", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d1a416c3-7f64-49d4-b1be-a176db014aeb, vol_name:cephfs) < ""
Jan 23 19:01:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d1a416c3-7f64-49d4-b1be-a176db014aeb'' moved to trashcan
Jan 23 19:01:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:01:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d1a416c3-7f64-49d4-b1be-a176db014aeb, vol_name:cephfs) < ""
Jan 23 19:01:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 23 19:01:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 23 19:01:57 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 23 19:01:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "auth_id": "tempest-cephx-id-1573583560", "format": "json"}]: dispatch
Jan 23 19:01:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "auth_id": "tempest-cephx-id-1573583560", "format": "json"}]: dispatch
Jan 23 19:01:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "format": "json"}]: dispatch
Jan 23 19:01:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7e746fb3-c8b1-4b98-8d5d-0856a5904d96", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:57 compute-0 ceph-mon[75097]: pgmap v948: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 6 op/s
Jan 23 19:01:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3476621829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:01:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/56729809' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:01:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/56729809' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:01:57 compute-0 ceph-mon[75097]: osdmap e135: 3 total, 3 up, 3 in
Jan 23 19:01:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d1a416c3-7f64-49d4-b1be-a176db014aeb", "format": "json"}]: dispatch
Jan 23 19:01:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d1a416c3-7f64-49d4-b1be-a176db014aeb", "force": true, "format": "json"}]: dispatch
Jan 23 19:01:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 8 op/s
Jan 23 19:01:59 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "Joe", "tenant_id": "db171ed11cd54288960e2675bc51a3a1", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:01:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, tenant_id:db171ed11cd54288960e2675bc51a3a1, vol_name:cephfs) < ""
Jan 23 19:01:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Jan 23 19:01:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 23 19:01:59 compute-0 ceph-mgr[75390]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Jan 23 19:01:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, tenant_id:db171ed11cd54288960e2675bc51a3a1, vol_name:cephfs) < ""
Jan 23 19:01:59 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:01:59.063+0000 7f06a7d20640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Jan 23 19:01:59 compute-0 ceph-mgr[75390]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Jan 23 19:01:59 compute-0 ceph-mon[75097]: pgmap v950: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 8 op/s
Jan 23 19:01:59 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 23 19:01:59 compute-0 nova_compute[240119]: 2026-01-23 19:01:59.801 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:59 compute-0 nova_compute[240119]: 2026-01-23 19:01:59.817 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:01:59 compute-0 nova_compute[240119]: 2026-01-23 19:01:59.817 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:02:00 compute-0 podman[246228]: 2026-01-23 19:02:00.025521652 +0000 UTC m=+0.099102774 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 23 19:02:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:02:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:02:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:02:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:02:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:02:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:02:00 compute-0 nova_compute[240119]: 2026-01-23 19:02:00.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:02:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 100 KiB/s wr, 14 op/s
Jan 23 19:02:00 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "Joe", "tenant_id": "db171ed11cd54288960e2675bc51a3a1", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:02:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 23 19:02:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 23 19:02:01 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 23 19:02:01 compute-0 nova_compute[240119]: 2026-01-23 19:02:01.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:02:01 compute-0 nova_compute[240119]: 2026-01-23 19:02:01.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:02:01 compute-0 nova_compute[240119]: 2026-01-23 19:02:01.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:02:01 compute-0 nova_compute[240119]: 2026-01-23 19:02:01.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:02:01 compute-0 ceph-mon[75097]: pgmap v951: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 100 KiB/s wr, 14 op/s
Jan 23 19:02:01 compute-0 ceph-mon[75097]: osdmap e136: 3 total, 3 up, 3 in
Jan 23 19:02:02 compute-0 nova_compute[240119]: 2026-01-23 19:02:02.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:02:02 compute-0 nova_compute[240119]: 2026-01-23 19:02:02.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:02:02 compute-0 nova_compute[240119]: 2026-01-23 19:02:02.350 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:02:02 compute-0 nova_compute[240119]: 2026-01-23 19:02:02.362 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:02:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "tempest-cephx-id-1425431123", "tenant_id": "db171ed11cd54288960e2675bc51a3a1", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:02:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1425431123, format:json, prefix:fs subvolume authorize, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, tenant_id:db171ed11cd54288960e2675bc51a3a1, vol_name:cephfs) < ""
Jan 23 19:02:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 66 KiB/s wr, 11 op/s
Jan 23 19:02:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1425431123", "format": "json"} v 0)
Jan 23 19:02:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1425431123", "format": "json"} : dispatch
Jan 23 19:02:02 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID tempest-cephx-id-1425431123 with tenant db171ed11cd54288960e2675bc51a3a1
Jan 23 19:02:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1425431123", "caps": ["mds", "allow rw path=/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/229e3528-6bd5-404f-9247-9cadfdbd82bc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_957eb80d-b96f-4546-9b33-5d50aefacdfe", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:02:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1425431123", "caps": ["mds", "allow rw path=/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/229e3528-6bd5-404f-9247-9cadfdbd82bc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_957eb80d-b96f-4546-9b33-5d50aefacdfe", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:02:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1425431123", "caps": ["mds", "allow rw path=/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/229e3528-6bd5-404f-9247-9cadfdbd82bc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_957eb80d-b96f-4546-9b33-5d50aefacdfe", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:02:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1425431123, format:json, prefix:fs subvolume authorize, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, tenant_id:db171ed11cd54288960e2675bc51a3a1, vol_name:cephfs) < ""
Jan 23 19:02:02 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "tempest-cephx-id-1425431123", "tenant_id": "db171ed11cd54288960e2675bc51a3a1", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:02:02 compute-0 ceph-mon[75097]: pgmap v953: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 66 KiB/s wr, 11 op/s
Jan 23 19:02:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1425431123", "format": "json"} : dispatch
Jan 23 19:02:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1425431123", "caps": ["mds", "allow rw path=/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/229e3528-6bd5-404f-9247-9cadfdbd82bc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_957eb80d-b96f-4546-9b33-5d50aefacdfe", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:02:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1425431123", "caps": ["mds", "allow rw path=/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/229e3528-6bd5-404f-9247-9cadfdbd82bc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_957eb80d-b96f-4546-9b33-5d50aefacdfe", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:02:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 95 KiB/s wr, 14 op/s
Jan 23 19:02:04 compute-0 ceph-mon[75097]: pgmap v954: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 95 KiB/s wr, 14 op/s
Jan 23 19:02:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 23 19:02:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:02:06 compute-0 ceph-mgr[75390]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '957eb80d-b96f-4546-9b33-5d50aefacdfe'
Jan 23 19:02:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:02:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 23 19:02:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:02:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/229e3528-6bd5-404f-9247-9cadfdbd82bc
Jan 23 19:02:06 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/229e3528-6bd5-404f-9247-9cadfdbd82bc],prefix=session evict} (starting...)
Jan 23 19:02:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:02:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:02:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 868 B/s rd, 81 KiB/s wr, 11 op/s
Jan 23 19:02:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 23 19:02:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 23 19:02:06 compute-0 ceph-mon[75097]: pgmap v955: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 868 B/s rd, 81 KiB/s wr, 11 op/s
Jan 23 19:02:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 76 KiB/s wr, 11 op/s
Jan 23 19:02:08 compute-0 ceph-mon[75097]: pgmap v956: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 76 KiB/s wr, 11 op/s
Jan 23 19:02:08 compute-0 podman[246256]: 2026-01-23 19:02:08.978207481 +0000 UTC m=+0.043241943 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 19:02:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s wr, 3 op/s
Jan 23 19:02:10 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "tempest-cephx-id-1425431123", "format": "json"}]: dispatch
Jan 23 19:02:10 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1425431123, format:json, prefix:fs subvolume deauthorize, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:02:10 compute-0 ceph-mon[75097]: pgmap v957: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s wr, 3 op/s
Jan 23 19:02:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "tempest-cephx-id-1425431123", "format": "json"}]: dispatch
Jan 23 19:02:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1425431123", "format": "json"} v 0)
Jan 23 19:02:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1425431123", "format": "json"} : dispatch
Jan 23 19:02:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1425431123"} v 0)
Jan 23 19:02:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1425431123"} : dispatch
Jan 23 19:02:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1425431123"}]': finished
Jan 23 19:02:10 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1425431123, format:json, prefix:fs subvolume deauthorize, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:02:10 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "tempest-cephx-id-1425431123", "format": "json"}]: dispatch
Jan 23 19:02:10 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1425431123, format:json, prefix:fs subvolume evict, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:02:10 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1425431123, client_metadata.root=/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/229e3528-6bd5-404f-9247-9cadfdbd82bc
Jan 23 19:02:10 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=tempest-cephx-id-1425431123,client_metadata.root=/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe/229e3528-6bd5-404f-9247-9cadfdbd82bc],prefix=session evict} (starting...)
Jan 23 19:02:10 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:02:10 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1425431123, format:json, prefix:fs subvolume evict, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:02:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1425431123", "format": "json"} : dispatch
Jan 23 19:02:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1425431123"} : dispatch
Jan 23 19:02:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1425431123"}]': finished
Jan 23 19:02:11 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "auth_id": "tempest-cephx-id-1425431123", "format": "json"}]: dispatch
Jan 23 19:02:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Jan 23 19:02:12 compute-0 ceph-mon[75097]: pgmap v958: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Jan 23 19:02:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:02:13.429 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 19:02:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:02:13.430 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 19:02:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:02:13.583 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:02:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:02:13.583 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:02:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:02:13.583 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, vol_name:cephfs) < ""
Jan 23 19:02:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Jan 23 19:02:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 23 19:02:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0)
Jan 23 19:02:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Jan 23 19:02:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Jan 23 19:02:14 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 23 19:02:14 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, vol_name:cephfs) < ""
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, vol_name:cephfs) < ""
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/ffba04e2-08cf-47b9-bd47-0043de8e96c3
Jan 23 19:02:14 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76/ffba04e2-08cf-47b9-bd47-0043de8e96c3],prefix=session evict} (starting...)
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, vol_name:cephfs) < ""
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "638d8bf8-4cef-4bb0-add8-32a6d6937431", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:638d8bf8-4cef-4bb0-add8-32a6d6937431, vol_name:cephfs) < ""
Jan 23 19:02:14 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:02:14.432 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/638d8bf8-4cef-4bb0-add8-32a6d6937431/e56f1a85-9f86-4f29-8f7c-a28f7c361b9c'.
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/638d8bf8-4cef-4bb0-add8-32a6d6937431/.meta.tmp'
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/638d8bf8-4cef-4bb0-add8-32a6d6937431/.meta.tmp' to config b'/volumes/_nogroup/638d8bf8-4cef-4bb0-add8-32a6d6937431/.meta'
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:638d8bf8-4cef-4bb0-add8-32a6d6937431, vol_name:cephfs) < ""
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "638d8bf8-4cef-4bb0-add8-32a6d6937431", "format": "json"}]: dispatch
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:638d8bf8-4cef-4bb0-add8-32a6d6937431, vol_name:cephfs) < ""
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:638d8bf8-4cef-4bb0-add8-32a6d6937431, vol_name:cephfs) < ""
Jan 23 19:02:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 36 KiB/s wr, 4 op/s
Jan 23 19:02:15 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 23 19:02:15 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Jan 23 19:02:15 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 23 19:02:15 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "638d8bf8-4cef-4bb0-add8-32a6d6937431", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:15 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "638d8bf8-4cef-4bb0-add8-32a6d6937431", "format": "json"}]: dispatch
Jan 23 19:02:15 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:15 compute-0 ceph-mon[75097]: pgmap v959: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 36 KiB/s wr, 4 op/s
Jan 23 19:02:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 16 KiB/s wr, 2 op/s
Jan 23 19:02:16 compute-0 ceph-mon[75097]: pgmap v960: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 16 KiB/s wr, 2 op/s
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cea1143e-99c2-47fd-9222-42d7395a438e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cea1143e-99c2-47fd-9222-42d7395a438e, vol_name:cephfs) < ""
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cea1143e-99c2-47fd-9222-42d7395a438e/16ff9829-c1d2-49e8-8fb9-df5f99e144d5'.
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cea1143e-99c2-47fd-9222-42d7395a438e/.meta.tmp'
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cea1143e-99c2-47fd-9222-42d7395a438e/.meta.tmp' to config b'/volumes/_nogroup/cea1143e-99c2-47fd-9222-42d7395a438e/.meta'
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cea1143e-99c2-47fd-9222-42d7395a438e, vol_name:cephfs) < ""
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cea1143e-99c2-47fd-9222-42d7395a438e", "format": "json"}]: dispatch
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cea1143e-99c2-47fd-9222-42d7395a438e, vol_name:cephfs) < ""
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cea1143e-99c2-47fd-9222-42d7395a438e, vol_name:cephfs) < ""
Jan 23 19:02:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0e81367e-537f-4aee-8570-08ff0c9c7c27", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e81367e-537f-4aee-8570-08ff0c9c7c27, vol_name:cephfs) < ""
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0e81367e-537f-4aee-8570-08ff0c9c7c27/eae16b10-8aa6-446d-a800-4d1b31446b6e'.
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0e81367e-537f-4aee-8570-08ff0c9c7c27/.meta.tmp'
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0e81367e-537f-4aee-8570-08ff0c9c7c27/.meta.tmp' to config b'/volumes/_nogroup/0e81367e-537f-4aee-8570-08ff0c9c7c27/.meta'
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e81367e-537f-4aee-8570-08ff0c9c7c27, vol_name:cephfs) < ""
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0e81367e-537f-4aee-8570-08ff0c9c7c27", "format": "json"}]: dispatch
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e81367e-537f-4aee-8570-08ff0c9c7c27, vol_name:cephfs) < ""
Jan 23 19:02:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e81367e-537f-4aee-8570-08ff0c9c7c27, vol_name:cephfs) < ""
Jan 23 19:02:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:17 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cea1143e-99c2-47fd-9222-42d7395a438e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:17 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cea1143e-99c2-47fd-9222-42d7395a438e", "format": "json"}]: dispatch
Jan 23 19:02:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:17 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0e81367e-537f-4aee-8570-08ff0c9c7c27", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:17 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0e81367e-537f-4aee-8570-08ff0c9c7c27", "format": "json"}]: dispatch
Jan 23 19:02:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:17 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "auth_id": "admin", "tenant_id": "e19ff7e178044a3f9516cdc773fa4d6b", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:02:17 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, tenant_id:e19ff7e178044a3f9516cdc773fa4d6b, vol_name:cephfs) < ""
Jan 23 19:02:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0)
Jan 23 19:02:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Jan 23 19:02:17 compute-0 ceph-mgr[75390]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Jan 23 19:02:17 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, tenant_id:e19ff7e178044a3f9516cdc773fa4d6b, vol_name:cephfs) < ""
Jan 23 19:02:17 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:17.659+0000 7f06a7d20640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Jan 23 19:02:17 compute-0 ceph-mgr[75390]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Jan 23 19:02:18 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8254111f-72e3-43c6-937a-37e35e2a9b5f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8254111f-72e3-43c6-937a-37e35e2a9b5f, vol_name:cephfs) < ""
Jan 23 19:02:18 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8254111f-72e3-43c6-937a-37e35e2a9b5f/6c61b547-9f38-4188-9422-572853b300e5'.
Jan 23 19:02:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8254111f-72e3-43c6-937a-37e35e2a9b5f/.meta.tmp'
Jan 23 19:02:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8254111f-72e3-43c6-937a-37e35e2a9b5f/.meta.tmp' to config b'/volumes/_nogroup/8254111f-72e3-43c6-937a-37e35e2a9b5f/.meta'
Jan 23 19:02:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8254111f-72e3-43c6-937a-37e35e2a9b5f, vol_name:cephfs) < ""
Jan 23 19:02:18 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8254111f-72e3-43c6-937a-37e35e2a9b5f", "format": "json"}]: dispatch
Jan 23 19:02:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8254111f-72e3-43c6-937a-37e35e2a9b5f, vol_name:cephfs) < ""
Jan 23 19:02:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8254111f-72e3-43c6-937a-37e35e2a9b5f, vol_name:cephfs) < ""
Jan 23 19:02:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:18 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 16 KiB/s wr, 2 op/s
Jan 23 19:02:18 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "auth_id": "admin", "tenant_id": "e19ff7e178044a3f9516cdc773fa4d6b", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:02:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Jan 23 19:02:18 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8254111f-72e3-43c6-937a-37e35e2a9b5f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:18 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8254111f-72e3-43c6-937a-37e35e2a9b5f", "format": "json"}]: dispatch
Jan 23 19:02:18 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:18 compute-0 ceph-mon[75097]: pgmap v961: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 16 KiB/s wr, 2 op/s
Jan 23 19:02:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 7 op/s
Jan 23 19:02:20 compute-0 ceph-mon[75097]: pgmap v962: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 7 op/s
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "auth_id": "david", "tenant_id": "e19ff7e178044a3f9516cdc773fa4d6b", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, tenant_id:e19ff7e178044a3f9516cdc773fa4d6b, vol_name:cephfs) < ""
Jan 23 19:02:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Jan 23 19:02:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 23 19:02:21 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID david with tenant e19ff7e178044a3f9516cdc773fa4d6b
Jan 23 19:02:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/c1ace486-d1b1-40ea-947d-f01d12491da4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_27d42cf3-c581-4351-85ac-a91e5a37a2c7", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:02:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/c1ace486-d1b1-40ea-947d-f01d12491da4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_27d42cf3-c581-4351-85ac-a91e5a37a2c7", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:02:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/c1ace486-d1b1-40ea-947d-f01d12491da4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_27d42cf3-c581-4351-85ac-a91e5a37a2c7", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:02:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, tenant_id:e19ff7e178044a3f9516cdc773fa4d6b, vol_name:cephfs) < ""
Jan 23 19:02:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "auth_id": "david", "tenant_id": "e19ff7e178044a3f9516cdc773fa4d6b", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:02:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 23 19:02:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/c1ace486-d1b1-40ea-947d-f01d12491da4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_27d42cf3-c581-4351-85ac-a91e5a37a2c7", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:02:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/c1ace486-d1b1-40ea-947d-f01d12491da4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_27d42cf3-c581-4351-85ac-a91e5a37a2c7", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0e81367e-537f-4aee-8570-08ff0c9c7c27", "format": "json"}]: dispatch
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0e81367e-537f-4aee-8570-08ff0c9c7c27, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0e81367e-537f-4aee-8570-08ff0c9c7c27, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:21 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:21.669+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e81367e-537f-4aee-8570-08ff0c9c7c27' of type subvolume
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e81367e-537f-4aee-8570-08ff0c9c7c27' of type subvolume
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0e81367e-537f-4aee-8570-08ff0c9c7c27", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e81367e-537f-4aee-8570-08ff0c9c7c27, vol_name:cephfs) < ""
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0e81367e-537f-4aee-8570-08ff0c9c7c27'' moved to trashcan
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e81367e-537f-4aee-8570-08ff0c9c7c27, vol_name:cephfs) < ""
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8254111f-72e3-43c6-937a-37e35e2a9b5f", "format": "json"}]: dispatch
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8254111f-72e3-43c6-937a-37e35e2a9b5f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8254111f-72e3-43c6-937a-37e35e2a9b5f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:21 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:21.843+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8254111f-72e3-43c6-937a-37e35e2a9b5f' of type subvolume
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8254111f-72e3-43c6-937a-37e35e2a9b5f' of type subvolume
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8254111f-72e3-43c6-937a-37e35e2a9b5f", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8254111f-72e3-43c6-937a-37e35e2a9b5f, vol_name:cephfs) < ""
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8254111f-72e3-43c6-937a-37e35e2a9b5f'' moved to trashcan
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8254111f-72e3-43c6-937a-37e35e2a9b5f, vol_name:cephfs) < ""
Jan 23 19:02:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 7 op/s
Jan 23 19:02:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0e81367e-537f-4aee-8570-08ff0c9c7c27", "format": "json"}]: dispatch
Jan 23 19:02:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0e81367e-537f-4aee-8570-08ff0c9c7c27", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8254111f-72e3-43c6-937a-37e35e2a9b5f", "format": "json"}]: dispatch
Jan 23 19:02:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8254111f-72e3-43c6-937a-37e35e2a9b5f", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:22 compute-0 ceph-mon[75097]: pgmap v963: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 7 op/s
Jan 23 19:02:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 78 KiB/s wr, 10 op/s
Jan 23 19:02:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, vol_name:cephfs) < ""
Jan 23 19:02:25 compute-0 ceph-mon[75097]: pgmap v964: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 78 KiB/s wr, 10 op/s
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4c4cc1cb-4d10-4014-806c-a3aa7cd153c6/c65e0bdc-4763-488c-8bc4-5dc94ca352d7'.
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4c4cc1cb-4d10-4014-806c-a3aa7cd153c6/.meta.tmp'
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4c4cc1cb-4d10-4014-806c-a3aa7cd153c6/.meta.tmp' to config b'/volumes/_nogroup/4c4cc1cb-4d10-4014-806c-a3aa7cd153c6/.meta'
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, vol_name:cephfs) < ""
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "format": "json"}]: dispatch
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, vol_name:cephfs) < ""
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, vol_name:cephfs) < ""
Jan 23 19:02:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:25 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea8f2387-32ae-461c-a095-2db8629c4d12", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea8f2387-32ae-461c-a095-2db8629c4d12, vol_name:cephfs) < ""
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ea8f2387-32ae-461c-a095-2db8629c4d12/1c646e6e-b031-4cb5-a96f-273ae0fc543e'.
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea8f2387-32ae-461c-a095-2db8629c4d12/.meta.tmp'
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea8f2387-32ae-461c-a095-2db8629c4d12/.meta.tmp' to config b'/volumes/_nogroup/ea8f2387-32ae-461c-a095-2db8629c4d12/.meta'
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea8f2387-32ae-461c-a095-2db8629c4d12, vol_name:cephfs) < ""
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea8f2387-32ae-461c-a095-2db8629c4d12", "format": "json"}]: dispatch
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea8f2387-32ae-461c-a095-2db8629c4d12, vol_name:cephfs) < ""
Jan 23 19:02:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea8f2387-32ae-461c-a095-2db8629c4d12, vol_name:cephfs) < ""
Jan 23 19:02:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:25 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:26 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:26 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "format": "json"}]: dispatch
Jan 23 19:02:26 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:26 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea8f2387-32ae-461c-a095-2db8629c4d12", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:26 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea8f2387-32ae-461c-a095-2db8629c4d12", "format": "json"}]: dispatch
Jan 23 19:02:26 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 62 KiB/s wr, 8 op/s
Jan 23 19:02:27 compute-0 ceph-mon[75097]: pgmap v965: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 62 KiB/s wr, 8 op/s
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "auth_id": "david", "tenant_id": "db171ed11cd54288960e2675bc51a3a1", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, tenant_id:db171ed11cd54288960e2675bc51a3a1, vol_name:cephfs) < ""
Jan 23 19:02:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Jan 23 19:02:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, tenant_id:db171ed11cd54288960e2675bc51a3a1, vol_name:cephfs) < ""
Jan 23 19:02:28 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:28.346+0000 7f06a7d20640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 62 KiB/s wr, 8 op/s
Jan 23 19:02:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:02:28
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups', '.mgr', 'vms', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log']
Jan 23 19:02:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:02:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "auth_id": "david", "tenant_id": "db171ed11cd54288960e2675bc51a3a1", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:02:29 compute-0 ceph-mon[75097]: pgmap v966: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 62 KiB/s wr, 8 op/s
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea8f2387-32ae-461c-a095-2db8629c4d12", "format": "json"}]: dispatch
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea8f2387-32ae-461c-a095-2db8629c4d12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea8f2387-32ae-461c-a095-2db8629c4d12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:30 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:30.229+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea8f2387-32ae-461c-a095-2db8629c4d12' of type subvolume
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea8f2387-32ae-461c-a095-2db8629c4d12' of type subvolume
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea8f2387-32ae-461c-a095-2db8629c4d12", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea8f2387-32ae-461c-a095-2db8629c4d12, vol_name:cephfs) < ""
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea8f2387-32ae-461c-a095-2db8629c4d12'' moved to trashcan
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea8f2387-32ae-461c-a095-2db8629c4d12, vol_name:cephfs) < ""
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cea1143e-99c2-47fd-9222-42d7395a438e", "format": "json"}]: dispatch
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cea1143e-99c2-47fd-9222-42d7395a438e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cea1143e-99c2-47fd-9222-42d7395a438e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:30 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:30.451+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cea1143e-99c2-47fd-9222-42d7395a438e' of type subvolume
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cea1143e-99c2-47fd-9222-42d7395a438e' of type subvolume
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cea1143e-99c2-47fd-9222-42d7395a438e", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cea1143e-99c2-47fd-9222-42d7395a438e, vol_name:cephfs) < ""
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 10 op/s
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cea1143e-99c2-47fd-9222-42d7395a438e'' moved to trashcan
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cea1143e-99c2-47fd-9222-42d7395a438e, vol_name:cephfs) < ""
Jan 23 19:02:30 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea8f2387-32ae-461c-a095-2db8629c4d12", "format": "json"}]: dispatch
Jan 23 19:02:30 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea8f2387-32ae-461c-a095-2db8629c4d12", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:30 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cea1143e-99c2-47fd-9222-42d7395a438e", "format": "json"}]: dispatch
Jan 23 19:02:30 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cea1143e-99c2-47fd-9222-42d7395a438e", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:30 compute-0 ceph-mon[75097]: pgmap v967: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 10 op/s
Jan 23 19:02:30 compute-0 podman[246278]: 2026-01-23 19:02:30.998874342 +0000 UTC m=+0.086863457 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:02:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "auth_id": "david", "format": "json"}]: dispatch
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, vol_name:cephfs) < ""
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '4c4cc1cb-4d10-4014-806c-a3aa7cd153c6'
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, vol_name:cephfs) < ""
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "auth_id": "david", "format": "json"}]: dispatch
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, vol_name:cephfs) < ""
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/4c4cc1cb-4d10-4014-806c-a3aa7cd153c6/c65e0bdc-4763-488c-8bc4-5dc94ca352d7
Jan 23 19:02:31 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/4c4cc1cb-4d10-4014-806c-a3aa7cd153c6/c65e0bdc-4763-488c-8bc4-5dc94ca352d7],prefix=session evict} (starting...)
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:02:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, vol_name:cephfs) < ""
Jan 23 19:02:32 compute-0 sshd-session[246304]: Connection closed by authenticating user root 221.126.252.82 port 4953 [preauth]
Jan 23 19:02:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 44 KiB/s wr, 5 op/s
Jan 23 19:02:32 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "auth_id": "david", "format": "json"}]: dispatch
Jan 23 19:02:32 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "auth_id": "david", "format": "json"}]: dispatch
Jan 23 19:02:32 compute-0 ceph-mon[75097]: pgmap v968: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 44 KiB/s wr, 5 op/s
Jan 23 19:02:33 compute-0 sudo[246307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:02:33 compute-0 sudo[246307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:02:33 compute-0 sudo[246307]: pam_unix(sudo:session): session closed for user root
Jan 23 19:02:33 compute-0 sudo[246332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:02:33 compute-0 sudo[246332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:02:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9ff37cdd-dadb-4205-9f9d-26df1c040f98", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ff37cdd-dadb-4205-9f9d-26df1c040f98, vol_name:cephfs) < ""
Jan 23 19:02:33 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9ff37cdd-dadb-4205-9f9d-26df1c040f98/ca70c03c-fab0-43fc-8eb7-994fe6b42801'.
Jan 23 19:02:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9ff37cdd-dadb-4205-9f9d-26df1c040f98/.meta.tmp'
Jan 23 19:02:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9ff37cdd-dadb-4205-9f9d-26df1c040f98/.meta.tmp' to config b'/volumes/_nogroup/9ff37cdd-dadb-4205-9f9d-26df1c040f98/.meta'
Jan 23 19:02:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ff37cdd-dadb-4205-9f9d-26df1c040f98, vol_name:cephfs) < ""
Jan 23 19:02:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9ff37cdd-dadb-4205-9f9d-26df1c040f98", "format": "json"}]: dispatch
Jan 23 19:02:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ff37cdd-dadb-4205-9f9d-26df1c040f98, vol_name:cephfs) < ""
Jan 23 19:02:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ff37cdd-dadb-4205-9f9d-26df1c040f98, vol_name:cephfs) < ""
Jan 23 19:02:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:33 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:34 compute-0 sudo[246332]: pam_unix(sudo:session): session closed for user root
Jan 23 19:02:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:02:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:02:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:02:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:02:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:02:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:02:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:02:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:02:34 compute-0 sudo[246388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:02:34 compute-0 sudo[246388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:02:34 compute-0 sudo[246388]: pam_unix(sudo:session): session closed for user root
Jan 23 19:02:34 compute-0 sudo[246413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:02:34 compute-0 sudo[246413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:02:34 compute-0 podman[246450]: 2026-01-23 19:02:34.49716731 +0000 UTC m=+0.041482891 container create 6e79a10438e3d0861c0ab7409ac4a80f2ecea8e508f537323805dac0af1faeda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:02:34 compute-0 systemd[1]: Started libpod-conmon-6e79a10438e3d0861c0ab7409ac4a80f2ecea8e508f537323805dac0af1faeda.scope.
Jan 23 19:02:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 8 op/s
Jan 23 19:02:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:02:34 compute-0 podman[246450]: 2026-01-23 19:02:34.47908026 +0000 UTC m=+0.023395861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:02:34 compute-0 podman[246450]: 2026-01-23 19:02:34.657383254 +0000 UTC m=+0.201698845 container init 6e79a10438e3d0861c0ab7409ac4a80f2ecea8e508f537323805dac0af1faeda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:02:34 compute-0 podman[246450]: 2026-01-23 19:02:34.665588083 +0000 UTC m=+0.209903664 container start 6e79a10438e3d0861c0ab7409ac4a80f2ecea8e508f537323805dac0af1faeda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 19:02:34 compute-0 magical_lehmann[246466]: 167 167
Jan 23 19:02:34 compute-0 systemd[1]: libpod-6e79a10438e3d0861c0ab7409ac4a80f2ecea8e508f537323805dac0af1faeda.scope: Deactivated successfully.
Jan 23 19:02:34 compute-0 podman[246450]: 2026-01-23 19:02:34.681307466 +0000 UTC m=+0.225623047 container attach 6e79a10438e3d0861c0ab7409ac4a80f2ecea8e508f537323805dac0af1faeda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:02:34 compute-0 podman[246450]: 2026-01-23 19:02:34.681677865 +0000 UTC m=+0.225993446 container died 6e79a10438e3d0861c0ab7409ac4a80f2ecea8e508f537323805dac0af1faeda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 19:02:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1613d7a66e26afc4a3df429dfa0b2d4bc012e74442ce41a8fe38bb8cb75384cc-merged.mount: Deactivated successfully.
Jan 23 19:02:34 compute-0 podman[246450]: 2026-01-23 19:02:34.751922636 +0000 UTC m=+0.296238257 container remove 6e79a10438e3d0861c0ab7409ac4a80f2ecea8e508f537323805dac0af1faeda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 19:02:34 compute-0 systemd[1]: libpod-conmon-6e79a10438e3d0861c0ab7409ac4a80f2ecea8e508f537323805dac0af1faeda.scope: Deactivated successfully.
Jan 23 19:02:34 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9ff37cdd-dadb-4205-9f9d-26df1c040f98", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9ff37cdd-dadb-4205-9f9d-26df1c040f98", "format": "json"}]: dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:02:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:02:34 compute-0 ceph-mon[75097]: pgmap v969: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 8 op/s
Jan 23 19:02:34 compute-0 podman[246492]: 2026-01-23 19:02:34.911212766 +0000 UTC m=+0.039211906 container create 6bf0c38dcba07fd4bf4e68dfcf4d4aa3d5d88ffb4420cfff9fced42b9eab6b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 23 19:02:34 compute-0 systemd[1]: Started libpod-conmon-6bf0c38dcba07fd4bf4e68dfcf4d4aa3d5d88ffb4420cfff9fced42b9eab6b7e.scope.
Jan 23 19:02:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da6aeef80805a99286b6e36a52454deeb8ae1ba1c613ce46df0fd3d1b471e3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da6aeef80805a99286b6e36a52454deeb8ae1ba1c613ce46df0fd3d1b471e3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da6aeef80805a99286b6e36a52454deeb8ae1ba1c613ce46df0fd3d1b471e3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da6aeef80805a99286b6e36a52454deeb8ae1ba1c613ce46df0fd3d1b471e3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da6aeef80805a99286b6e36a52454deeb8ae1ba1c613ce46df0fd3d1b471e3c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:34 compute-0 podman[246492]: 2026-01-23 19:02:34.895484544 +0000 UTC m=+0.023483704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:02:34 compute-0 podman[246492]: 2026-01-23 19:02:34.993210254 +0000 UTC m=+0.121209414 container init 6bf0c38dcba07fd4bf4e68dfcf4d4aa3d5d88ffb4420cfff9fced42b9eab6b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mestorf, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 23 19:02:34 compute-0 podman[246492]: 2026-01-23 19:02:34.999652301 +0000 UTC m=+0.127651441 container start 6bf0c38dcba07fd4bf4e68dfcf4d4aa3d5d88ffb4420cfff9fced42b9eab6b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 19:02:35 compute-0 podman[246492]: 2026-01-23 19:02:35.003423883 +0000 UTC m=+0.131423053 container attach 6bf0c38dcba07fd4bf4e68dfcf4d4aa3d5d88ffb4420cfff9fced42b9eab6b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mestorf, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 19:02:35 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "auth_id": "david", "format": "json"}]: dispatch
Jan 23 19:02:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:02:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Jan 23 19:02:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 23 19:02:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0)
Jan 23 19:02:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Jan 23 19:02:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Jan 23 19:02:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:02:35 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "auth_id": "david", "format": "json"}]: dispatch
Jan 23 19:02:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:02:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/c1ace486-d1b1-40ea-947d-f01d12491da4
Jan 23 19:02:35 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7/c1ace486-d1b1-40ea-947d-f01d12491da4],prefix=session evict} (starting...)
Jan 23 19:02:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:02:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:02:35 compute-0 wizardly_mestorf[246508]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:02:35 compute-0 wizardly_mestorf[246508]: --> All data devices are unavailable
Jan 23 19:02:35 compute-0 systemd[1]: libpod-6bf0c38dcba07fd4bf4e68dfcf4d4aa3d5d88ffb4420cfff9fced42b9eab6b7e.scope: Deactivated successfully.
Jan 23 19:02:35 compute-0 conmon[246508]: conmon 6bf0c38dcba07fd4bf4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bf0c38dcba07fd4bf4e68dfcf4d4aa3d5d88ffb4420cfff9fced42b9eab6b7e.scope/container/memory.events
Jan 23 19:02:35 compute-0 podman[246492]: 2026-01-23 19:02:35.472001798 +0000 UTC m=+0.600000938 container died 6bf0c38dcba07fd4bf4e68dfcf4d4aa3d5d88ffb4420cfff9fced42b9eab6b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:02:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3da6aeef80805a99286b6e36a52454deeb8ae1ba1c613ce46df0fd3d1b471e3c-merged.mount: Deactivated successfully.
Jan 23 19:02:35 compute-0 podman[246492]: 2026-01-23 19:02:35.519277279 +0000 UTC m=+0.647276419 container remove 6bf0c38dcba07fd4bf4e68dfcf4d4aa3d5d88ffb4420cfff9fced42b9eab6b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 19:02:35 compute-0 systemd[1]: libpod-conmon-6bf0c38dcba07fd4bf4e68dfcf4d4aa3d5d88ffb4420cfff9fced42b9eab6b7e.scope: Deactivated successfully.
Jan 23 19:02:35 compute-0 sudo[246413]: pam_unix(sudo:session): session closed for user root
Jan 23 19:02:35 compute-0 sudo[246541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:02:35 compute-0 sudo[246541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:02:35 compute-0 sudo[246541]: pam_unix(sudo:session): session closed for user root
Jan 23 19:02:35 compute-0 sudo[246566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:02:35 compute-0 sudo[246566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:02:35 compute-0 podman[246604]: 2026-01-23 19:02:35.970654755 +0000 UTC m=+0.043357087 container create bf813a8b74ebd298fe127e6c0bcf25c3463467532cc15d013ef54371430429fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elgamal, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:02:36 compute-0 systemd[1]: Started libpod-conmon-bf813a8b74ebd298fe127e6c0bcf25c3463467532cc15d013ef54371430429fd.scope.
Jan 23 19:02:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:02:36 compute-0 podman[246604]: 2026-01-23 19:02:36.03242307 +0000 UTC m=+0.105125412 container init bf813a8b74ebd298fe127e6c0bcf25c3463467532cc15d013ef54371430429fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elgamal, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 19:02:36 compute-0 podman[246604]: 2026-01-23 19:02:36.039749788 +0000 UTC m=+0.112452110 container start bf813a8b74ebd298fe127e6c0bcf25c3463467532cc15d013ef54371430429fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 19:02:36 compute-0 podman[246604]: 2026-01-23 19:02:36.043305395 +0000 UTC m=+0.116007717 container attach bf813a8b74ebd298fe127e6c0bcf25c3463467532cc15d013ef54371430429fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elgamal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 19:02:36 compute-0 podman[246604]: 2026-01-23 19:02:35.951667652 +0000 UTC m=+0.024369994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:02:36 compute-0 blissful_elgamal[246620]: 167 167
Jan 23 19:02:36 compute-0 systemd[1]: libpod-bf813a8b74ebd298fe127e6c0bcf25c3463467532cc15d013ef54371430429fd.scope: Deactivated successfully.
Jan 23 19:02:36 compute-0 podman[246604]: 2026-01-23 19:02:36.046435311 +0000 UTC m=+0.119137643 container died bf813a8b74ebd298fe127e6c0bcf25c3463467532cc15d013ef54371430429fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 19:02:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fb23575e2485f37c2301feb6df6407c379add1e22ba4aeeb9753b6021d4f820-merged.mount: Deactivated successfully.
Jan 23 19:02:36 compute-0 podman[246604]: 2026-01-23 19:02:36.08334618 +0000 UTC m=+0.156048502 container remove bf813a8b74ebd298fe127e6c0bcf25c3463467532cc15d013ef54371430429fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elgamal, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 23 19:02:36 compute-0 systemd[1]: libpod-conmon-bf813a8b74ebd298fe127e6c0bcf25c3463467532cc15d013ef54371430429fd.scope: Deactivated successfully.
Jan 23 19:02:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "auth_id": "david", "format": "json"}]: dispatch
Jan 23 19:02:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 23 19:02:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Jan 23 19:02:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Jan 23 19:02:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "auth_id": "david", "format": "json"}]: dispatch
Jan 23 19:02:36 compute-0 podman[246646]: 2026-01-23 19:02:36.237634299 +0000 UTC m=+0.040929579 container create eecf55028c2d10df59e773d4957301daa6a581a8cf60bb226a91fcf9f91b251c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_fermi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 19:02:36 compute-0 systemd[1]: Started libpod-conmon-eecf55028c2d10df59e773d4957301daa6a581a8cf60bb226a91fcf9f91b251c.scope.
Jan 23 19:02:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd325d97a10edf60c929a5d8c1bcd51fbb36a4a1ab799f13e49e42509a7722e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd325d97a10edf60c929a5d8c1bcd51fbb36a4a1ab799f13e49e42509a7722e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd325d97a10edf60c929a5d8c1bcd51fbb36a4a1ab799f13e49e42509a7722e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd325d97a10edf60c929a5d8c1bcd51fbb36a4a1ab799f13e49e42509a7722e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:36 compute-0 podman[246646]: 2026-01-23 19:02:36.22126083 +0000 UTC m=+0.024556140 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:02:36 compute-0 podman[246646]: 2026-01-23 19:02:36.322467145 +0000 UTC m=+0.125762455 container init eecf55028c2d10df59e773d4957301daa6a581a8cf60bb226a91fcf9f91b251c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:02:36 compute-0 podman[246646]: 2026-01-23 19:02:36.328976994 +0000 UTC m=+0.132272284 container start eecf55028c2d10df59e773d4957301daa6a581a8cf60bb226a91fcf9f91b251c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 23 19:02:36 compute-0 podman[246646]: 2026-01-23 19:02:36.332721154 +0000 UTC m=+0.136016474 container attach eecf55028c2d10df59e773d4957301daa6a581a8cf60bb226a91fcf9f91b251c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_fermi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:02:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 5 op/s
Jan 23 19:02:36 compute-0 nervous_fermi[246662]: {
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:     "0": [
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:         {
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "devices": [
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "/dev/loop3"
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             ],
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_name": "ceph_lv0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_size": "21470642176",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "name": "ceph_lv0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "tags": {
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.cluster_name": "ceph",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.crush_device_class": "",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.encrypted": "0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.objectstore": "bluestore",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.osd_id": "0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.type": "block",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.vdo": "0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.with_tpm": "0"
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             },
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "type": "block",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "vg_name": "ceph_vg0"
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:         }
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:     ],
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:     "1": [
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:         {
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "devices": [
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "/dev/loop4"
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             ],
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_name": "ceph_lv1",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_size": "21470642176",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "name": "ceph_lv1",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "tags": {
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.cluster_name": "ceph",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.crush_device_class": "",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.encrypted": "0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.objectstore": "bluestore",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.osd_id": "1",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.type": "block",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.vdo": "0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.with_tpm": "0"
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             },
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "type": "block",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "vg_name": "ceph_vg1"
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:         }
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:     ],
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:     "2": [
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:         {
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "devices": [
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "/dev/loop5"
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             ],
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_name": "ceph_lv2",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_size": "21470642176",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "name": "ceph_lv2",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "tags": {
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.cluster_name": "ceph",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.crush_device_class": "",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.encrypted": "0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.objectstore": "bluestore",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.osd_id": "2",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.type": "block",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.vdo": "0",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:                 "ceph.with_tpm": "0"
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             },
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "type": "block",
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:             "vg_name": "ceph_vg2"
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:         }
Jan 23 19:02:36 compute-0 nervous_fermi[246662]:     ]
Jan 23 19:02:36 compute-0 nervous_fermi[246662]: }
Jan 23 19:02:36 compute-0 systemd[1]: libpod-eecf55028c2d10df59e773d4957301daa6a581a8cf60bb226a91fcf9f91b251c.scope: Deactivated successfully.
Jan 23 19:02:36 compute-0 podman[246646]: 2026-01-23 19:02:36.611917736 +0000 UTC m=+0.415213026 container died eecf55028c2d10df59e773d4957301daa6a581a8cf60bb226a91fcf9f91b251c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:02:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fd325d97a10edf60c929a5d8c1bcd51fbb36a4a1ab799f13e49e42509a7722e-merged.mount: Deactivated successfully.
Jan 23 19:02:36 compute-0 podman[246646]: 2026-01-23 19:02:36.652040194 +0000 UTC m=+0.455335484 container remove eecf55028c2d10df59e773d4957301daa6a581a8cf60bb226a91fcf9f91b251c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:02:36 compute-0 systemd[1]: libpod-conmon-eecf55028c2d10df59e773d4957301daa6a581a8cf60bb226a91fcf9f91b251c.scope: Deactivated successfully.
Jan 23 19:02:36 compute-0 sudo[246566]: pam_unix(sudo:session): session closed for user root
Jan 23 19:02:36 compute-0 sudo[246683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:02:36 compute-0 sudo[246683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:02:36 compute-0 sudo[246683]: pam_unix(sudo:session): session closed for user root
Jan 23 19:02:36 compute-0 sudo[246708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:02:36 compute-0 sudo[246708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:02:37 compute-0 podman[246746]: 2026-01-23 19:02:37.103719116 +0000 UTC m=+0.040347274 container create a4b15d9829647816ed398591e0f4cad856d8eba3c1a0b45b88e34ab29c2dcc34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 19:02:37 compute-0 systemd[1]: Started libpod-conmon-a4b15d9829647816ed398591e0f4cad856d8eba3c1a0b45b88e34ab29c2dcc34.scope.
Jan 23 19:02:37 compute-0 ceph-mon[75097]: pgmap v970: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 5 op/s
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:02:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:02:37 compute-0 podman[246746]: 2026-01-23 19:02:37.08537958 +0000 UTC m=+0.022007758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:02:37 compute-0 podman[246746]: 2026-01-23 19:02:37.220208114 +0000 UTC m=+0.156836302 container init a4b15d9829647816ed398591e0f4cad856d8eba3c1a0b45b88e34ab29c2dcc34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/2749c494-8646-4af9-9180-34590b901d74'.
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta.tmp'
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta.tmp' to config b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta'
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:02:37 compute-0 podman[246746]: 2026-01-23 19:02:37.231238122 +0000 UTC m=+0.167866280 container start a4b15d9829647816ed398591e0f4cad856d8eba3c1a0b45b88e34ab29c2dcc34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Jan 23 19:02:37 compute-0 podman[246746]: 2026-01-23 19:02:37.235016685 +0000 UTC m=+0.171644843 container attach a4b15d9829647816ed398591e0f4cad856d8eba3c1a0b45b88e34ab29c2dcc34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "format": "json"}]: dispatch
Jan 23 19:02:37 compute-0 blissful_cannon[246762]: 167 167
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:02:37 compute-0 systemd[1]: libpod-a4b15d9829647816ed398591e0f4cad856d8eba3c1a0b45b88e34ab29c2dcc34.scope: Deactivated successfully.
Jan 23 19:02:37 compute-0 podman[246746]: 2026-01-23 19:02:37.237753242 +0000 UTC m=+0.174381400 container died a4b15d9829647816ed398591e0f4cad856d8eba3c1a0b45b88e34ab29c2dcc34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:02:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:37 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a2641fb7cd141562e0d2d77168b59827aa68cac958b8b161b665ba8d9ed93e4-merged.mount: Deactivated successfully.
Jan 23 19:02:37 compute-0 podman[246746]: 2026-01-23 19:02:37.274137468 +0000 UTC m=+0.210765626 container remove a4b15d9829647816ed398591e0f4cad856d8eba3c1a0b45b88e34ab29c2dcc34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 19:02:37 compute-0 systemd[1]: libpod-conmon-a4b15d9829647816ed398591e0f4cad856d8eba3c1a0b45b88e34ab29c2dcc34.scope: Deactivated successfully.
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9ff37cdd-dadb-4205-9f9d-26df1c040f98", "format": "json"}]: dispatch
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9ff37cdd-dadb-4205-9f9d-26df1c040f98, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9ff37cdd-dadb-4205-9f9d-26df1c040f98, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:37 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:37.403+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ff37cdd-dadb-4205-9f9d-26df1c040f98' of type subvolume
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ff37cdd-dadb-4205-9f9d-26df1c040f98' of type subvolume
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9ff37cdd-dadb-4205-9f9d-26df1c040f98", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ff37cdd-dadb-4205-9f9d-26df1c040f98, vol_name:cephfs) < ""
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9ff37cdd-dadb-4205-9f9d-26df1c040f98'' moved to trashcan
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ff37cdd-dadb-4205-9f9d-26df1c040f98, vol_name:cephfs) < ""
Jan 23 19:02:37 compute-0 podman[246784]: 2026-01-23 19:02:37.437014665 +0000 UTC m=+0.043104151 container create c5388fe87d305e1af2631bb81c4d6d3cd932d6fd2aa48b18f2e329736236141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_liskov, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:02:37 compute-0 systemd[1]: Started libpod-conmon-c5388fe87d305e1af2631bb81c4d6d3cd932d6fd2aa48b18f2e329736236141c.scope.
Jan 23 19:02:37 compute-0 podman[246784]: 2026-01-23 19:02:37.418969486 +0000 UTC m=+0.025059002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:02:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:02:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e162ce3959e5429ed16b8828a049e9b76bd6465080c92f0f138ca8391d0147/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e162ce3959e5429ed16b8828a049e9b76bd6465080c92f0f138ca8391d0147/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e162ce3959e5429ed16b8828a049e9b76bd6465080c92f0f138ca8391d0147/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e162ce3959e5429ed16b8828a049e9b76bd6465080c92f0f138ca8391d0147/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:02:37 compute-0 podman[246784]: 2026-01-23 19:02:37.534045229 +0000 UTC m=+0.140134745 container init c5388fe87d305e1af2631bb81c4d6d3cd932d6fd2aa48b18f2e329736236141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_liskov, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:02:37 compute-0 podman[246784]: 2026-01-23 19:02:37.54271002 +0000 UTC m=+0.148799507 container start c5388fe87d305e1af2631bb81c4d6d3cd932d6fd2aa48b18f2e329736236141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 19:02:37 compute-0 podman[246784]: 2026-01-23 19:02:37.546328658 +0000 UTC m=+0.152418144 container attach c5388fe87d305e1af2631bb81c4d6d3cd932d6fd2aa48b18f2e329736236141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:02:38 compute-0 lvm[246879]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:02:38 compute-0 lvm[246880]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:02:38 compute-0 lvm[246880]: VG ceph_vg1 finished
Jan 23 19:02:38 compute-0 lvm[246879]: VG ceph_vg0 finished
Jan 23 19:02:38 compute-0 lvm[246882]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:02:38 compute-0 lvm[246882]: VG ceph_vg2 finished
Jan 23 19:02:38 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:38 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "format": "json"}]: dispatch
Jan 23 19:02:38 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:38 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9ff37cdd-dadb-4205-9f9d-26df1c040f98", "format": "json"}]: dispatch
Jan 23 19:02:38 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9ff37cdd-dadb-4205-9f9d-26df1c040f98", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:38 compute-0 priceless_liskov[246800]: {}
Jan 23 19:02:38 compute-0 systemd[1]: libpod-c5388fe87d305e1af2631bb81c4d6d3cd932d6fd2aa48b18f2e329736236141c.scope: Deactivated successfully.
Jan 23 19:02:38 compute-0 podman[246784]: 2026-01-23 19:02:38.398303203 +0000 UTC m=+1.004392699 container died c5388fe87d305e1af2631bb81c4d6d3cd932d6fd2aa48b18f2e329736236141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 19:02:38 compute-0 systemd[1]: libpod-c5388fe87d305e1af2631bb81c4d6d3cd932d6fd2aa48b18f2e329736236141c.scope: Consumed 1.340s CPU time.
Jan 23 19:02:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e162ce3959e5429ed16b8828a049e9b76bd6465080c92f0f138ca8391d0147-merged.mount: Deactivated successfully.
Jan 23 19:02:38 compute-0 podman[246784]: 2026-01-23 19:02:38.512456503 +0000 UTC m=+1.118545989 container remove c5388fe87d305e1af2631bb81c4d6d3cd932d6fd2aa48b18f2e329736236141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_liskov, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 19:02:38 compute-0 systemd[1]: libpod-conmon-c5388fe87d305e1af2631bb81c4d6d3cd932d6fd2aa48b18f2e329736236141c.scope: Deactivated successfully.
Jan 23 19:02:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 5 op/s
Jan 23 19:02:38 compute-0 sudo[246708]: pam_unix(sudo:session): session closed for user root
Jan 23 19:02:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:02:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:02:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:02:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:02:38 compute-0 sudo[246899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:02:38 compute-0 sudo[246899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:02:38 compute-0 sudo[246899]: pam_unix(sudo:session): session closed for user root
Jan 23 19:02:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "format": "json"}]: dispatch
Jan 23 19:02:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:39 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4c4cc1cb-4d10-4014-806c-a3aa7cd153c6' of type subvolume
Jan 23 19:02:39 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:39.171+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4c4cc1cb-4d10-4014-806c-a3aa7cd153c6' of type subvolume
Jan 23 19:02:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, vol_name:cephfs) < ""
Jan 23 19:02:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4c4cc1cb-4d10-4014-806c-a3aa7cd153c6'' moved to trashcan
Jan 23 19:02:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4c4cc1cb-4d10-4014-806c-a3aa7cd153c6, vol_name:cephfs) < ""
Jan 23 19:02:39 compute-0 ceph-mon[75097]: pgmap v971: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 5 op/s
Jan 23 19:02:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:02:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:02:39 compute-0 podman[246924]: 2026-01-23 19:02:39.974683904 +0000 UTC m=+0.054891379 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659016165485247 of space, bias 1.0, pg target 0.1997704849645574 quantized to 32 (current 32)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.506083516844193e-05 of space, bias 4.0, pg target 0.09007300220213033 quantized to 16 (current 16)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053787e-07 of space, bias 1.0, pg target 0.0001907721234616136 quantized to 32 (current 32)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 61 KiB/s wr, 9 op/s
Jan 23 19:02:40 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "format": "json"}]: dispatch
Jan 23 19:02:40 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4c4cc1cb-4d10-4014-806c-a3aa7cd153c6", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419/1f00d6e5-9724-4d7b-9984-7f13c821915d'.
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419/.meta.tmp'
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419/.meta.tmp' to config b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419/.meta'
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "format": "json"}]: dispatch
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:40 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e, vol_name:cephfs) < ""
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e/f13c888d-5714-4252-924d-106f140f5711'.
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e/.meta.tmp'
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e/.meta.tmp' to config b'/volumes/_nogroup/0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e/.meta'
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e, vol_name:cephfs) < ""
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e", "format": "json"}]: dispatch
Jan 23 19:02:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e, vol_name:cephfs) < ""
Jan 23 19:02:41 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e, vol_name:cephfs) < ""
Jan 23 19:02:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:41 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:41 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "snap_name": "0d399fc2-1c4a-4fa4-a842-d2d9e4db8512", "format": "json"}]: dispatch
Jan 23 19:02:41 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0d399fc2-1c4a-4fa4-a842-d2d9e4db8512, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:02:41 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0d399fc2-1c4a-4fa4-a842-d2d9e4db8512, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:02:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:41 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:41 compute-0 ceph-mon[75097]: pgmap v972: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 61 KiB/s wr, 9 op/s
Jan 23 19:02:41 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "format": "json"}]: dispatch
Jan 23 19:02:41 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:41 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:41 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e", "format": "json"}]: dispatch
Jan 23 19:02:41 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:41 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "snap_name": "0d399fc2-1c4a-4fa4-a842-d2d9e4db8512", "format": "json"}]: dispatch
Jan 23 19:02:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 46 KiB/s wr, 7 op/s
Jan 23 19:02:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "format": "json"}]: dispatch
Jan 23 19:02:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:42 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:42.811+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '957eb80d-b96f-4546-9b33-5d50aefacdfe' of type subvolume
Jan 23 19:02:42 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '957eb80d-b96f-4546-9b33-5d50aefacdfe' of type subvolume
Jan 23 19:02:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:02:42 compute-0 ceph-mon[75097]: pgmap v973: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 46 KiB/s wr, 7 op/s
Jan 23 19:02:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/957eb80d-b96f-4546-9b33-5d50aefacdfe'' moved to trashcan
Jan 23 19:02:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:957eb80d-b96f-4546-9b33-5d50aefacdfe, vol_name:cephfs) < ""
Jan 23 19:02:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "snap_name": "a35127f8-9b81-4991-98e4-157cc93eee79", "format": "json"}]: dispatch
Jan 23 19:02:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a35127f8-9b81-4991-98e4-157cc93eee79, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a35127f8-9b81-4991-98e4-157cc93eee79, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "format": "json"}]: dispatch
Jan 23 19:02:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "957eb80d-b96f-4546-9b33-5d50aefacdfe", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "snap_name": "a35127f8-9b81-4991-98e4-157cc93eee79", "format": "json"}]: dispatch
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "snap_name": "0d399fc2-1c4a-4fa4-a842-d2d9e4db8512", "target_sub_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "format": "json"}]: dispatch
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:0d399fc2-1c4a-4fa4-a842-d2d9e4db8512, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, target_sub_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, vol_name:cephfs) < ""
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/32292b2b-cecc-42cd-8eca-13e275c0c590'.
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta.tmp'
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta.tmp' to config b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta'
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 5512661e-120d-4663-b766-a03ade1f9189 for path b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf'
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta.tmp'
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta.tmp' to config b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta'
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:0d399fc2-1c4a-4fa4-a842-d2d9e4db8512, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, target_sub_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, vol_name:cephfs) < ""
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "format": "json"}]: dispatch
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:44.143+0000 7f06ad52b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:44.143+0000 7f06ad52b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:44.143+0000 7f06ad52b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:44.143+0000 7f06ad52b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:44.143+0000 7f06ad52b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 76 KiB/s wr, 10 op/s
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 821840f9-97a9-48b2-a779-bb77ecd1f3bf)
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:44.658+0000 7f06ac529640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:44.658+0000 7f06ac529640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:44.658+0000 7f06ac529640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:44.658+0000 7f06ac529640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:44.658+0000 7f06ac529640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 821840f9-97a9-48b2-a779-bb77ecd1f3bf) -- by 0 seconds
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta.tmp'
Jan 23 19:02:44 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta.tmp' to config b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta'
Jan 23 19:02:44 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "snap_name": "0d399fc2-1c4a-4fa4-a842-d2d9e4db8512", "target_sub_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "format": "json"}]: dispatch
Jan 23 19:02:44 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "format": "json"}]: dispatch
Jan 23 19:02:44 compute-0 ceph-mon[75097]: pgmap v974: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 76 KiB/s wr, 10 op/s
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:45 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:45.148+0000 7f0689215640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:45 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:45.148+0000 7f0689215640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:45 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:45.148+0000 7f0689215640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:45 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:45.148+0000 7f0689215640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:45 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:45.148+0000 7f0689215640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.snap/0d399fc2-1c4a-4fa4-a842-d2d9e4db8512/2749c494-8646-4af9-9180-34590b901d74' to b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/32292b2b-cecc-42cd-8eca-13e275c0c590'
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta.tmp'
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta.tmp' to config b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta'
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.clone_index] untracking 5512661e-120d-4663-b766-a03ade1f9189
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta.tmp'
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta.tmp' to config b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta'
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta.tmp'
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta.tmp' to config b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf/.meta'
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 821840f9-97a9-48b2-a779-bb77ecd1f3bf)
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e", "format": "json"}]: dispatch
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e' of type subvolume
Jan 23 19:02:45 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:45.464+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e' of type subvolume
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e, vol_name:cephfs) < ""
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e'' moved to trashcan
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e, vol_name:cephfs) < ""
Jan 23 19:02:45 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e", "format": "json"}]: dispatch
Jan 23 19:02:45 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0ec0da82-b94c-4d70-ba4c-e5b9fc27dc1e", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:45 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.rszfwe(active, since 28m)
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "format": "json"}]: dispatch
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1291e2ec-1b23-4741-b658-cb06b2d77e76' of type subvolume
Jan 23 19:02:46 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:46.142+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1291e2ec-1b23-4741-b658-cb06b2d77e76' of type subvolume
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, vol_name:cephfs) < ""
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1291e2ec-1b23-4741-b658-cb06b2d77e76'' moved to trashcan
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1291e2ec-1b23-4741-b658-cb06b2d77e76, vol_name:cephfs) < ""
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f06b9d6fb50>
Jan 23 19:02:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 54 KiB/s wr, 8 op/s
Jan 23 19:02:46 compute-0 ceph-mon[75097]: mgrmap e12: compute-0.rszfwe(active, since 28m)
Jan 23 19:02:46 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "format": "json"}]: dispatch
Jan 23 19:02:46 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1291e2ec-1b23-4741-b658-cb06b2d77e76", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:46 compute-0 ceph-mon[75097]: pgmap v975: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 54 KiB/s wr, 8 op/s
Jan 23 19:02:47 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "snap_name": "a35127f8-9b81-4991-98e4-157cc93eee79_e0e569dd-06da-47d6-9ed3-5f32e4b27bde", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a35127f8-9b81-4991-98e4-157cc93eee79_e0e569dd-06da-47d6-9ed3-5f32e4b27bde, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419/.meta.tmp'
Jan 23 19:02:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419/.meta.tmp' to config b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419/.meta'
Jan 23 19:02:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a35127f8-9b81-4991-98e4-157cc93eee79_e0e569dd-06da-47d6-9ed3-5f32e4b27bde, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:47 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "snap_name": "a35127f8-9b81-4991-98e4-157cc93eee79", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a35127f8-9b81-4991-98e4-157cc93eee79, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419/.meta.tmp'
Jan 23 19:02:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419/.meta.tmp' to config b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419/.meta'
Jan 23 19:02:47 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a35127f8-9b81-4991-98e4-157cc93eee79, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:47 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "snap_name": "a35127f8-9b81-4991-98e4-157cc93eee79_e0e569dd-06da-47d6-9ed3-5f32e4b27bde", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:47 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "snap_name": "a35127f8-9b81-4991-98e4-157cc93eee79", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 54 KiB/s wr, 7 op/s
Jan 23 19:02:48 compute-0 ceph-mon[75097]: pgmap v976: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 54 KiB/s wr, 7 op/s
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "auth_id": "admin", "format": "json"}]: dispatch
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Jan 23 19:02:50 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:50.171+0000 7f06a7d20640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "format": "json"}]: dispatch
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:50 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:50.303+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '27d42cf3-c581-4351-85ac-a91e5a37a2c7' of type subvolume
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '27d42cf3-c581-4351-85ac-a91e5a37a2c7' of type subvolume
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/27d42cf3-c581-4351-85ac-a91e5a37a2c7'' moved to trashcan
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:27d42cf3-c581-4351-85ac-a91e5a37a2c7, vol_name:cephfs) < ""
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a9ac4747-bff1-455b-923a-8ae6bf6a959a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a9ac4747-bff1-455b-923a-8ae6bf6a959a, vol_name:cephfs) < ""
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 93 KiB/s wr, 15 op/s
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a9ac4747-bff1-455b-923a-8ae6bf6a959a/f42dbae3-77db-4923-a7b0-e305383e81b8'.
Jan 23 19:02:50 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "auth_id": "admin", "format": "json"}]: dispatch
Jan 23 19:02:50 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "format": "json"}]: dispatch
Jan 23 19:02:50 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "27d42cf3-c581-4351-85ac-a91e5a37a2c7", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:50 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a9ac4747-bff1-455b-923a-8ae6bf6a959a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:02:50 compute-0 ceph-mon[75097]: pgmap v977: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 93 KiB/s wr, 15 op/s
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9ac4747-bff1-455b-923a-8ae6bf6a959a/.meta.tmp'
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9ac4747-bff1-455b-923a-8ae6bf6a959a/.meta.tmp' to config b'/volumes/_nogroup/a9ac4747-bff1-455b-923a-8ae6bf6a959a/.meta'
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a9ac4747-bff1-455b-923a-8ae6bf6a959a, vol_name:cephfs) < ""
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a9ac4747-bff1-455b-923a-8ae6bf6a959a", "format": "json"}]: dispatch
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a9ac4747-bff1-455b-923a-8ae6bf6a959a, vol_name:cephfs) < ""
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a9ac4747-bff1-455b-923a-8ae6bf6a959a, vol_name:cephfs) < ""
Jan 23 19:02:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:50 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "da2057f4-30b8-4287-b324-aaa510666419", "format": "json"}]: dispatch
Jan 23 19:02:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:da2057f4-30b8-4287-b324-aaa510666419, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:da2057f4-30b8-4287-b324-aaa510666419, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:51.000+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'da2057f4-30b8-4287-b324-aaa510666419' of type subvolume
Jan 23 19:02:51 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'da2057f4-30b8-4287-b324-aaa510666419' of type subvolume
Jan 23 19:02:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/da2057f4-30b8-4287-b324-aaa510666419'' moved to trashcan
Jan 23 19:02:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:da2057f4-30b8-4287-b324-aaa510666419, vol_name:cephfs) < ""
Jan 23 19:02:51 compute-0 ceph-mgr[75390]: [progress INFO root] Writing back 17 completed events
Jan 23 19:02:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 19:02:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:02:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 23 19:02:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a9ac4747-bff1-455b-923a-8ae6bf6a959a", "format": "json"}]: dispatch
Jan 23 19:02:51 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "da2057f4-30b8-4287-b324-aaa510666419", "format": "json"}]: dispatch
Jan 23 19:02:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "da2057f4-30b8-4287-b324-aaa510666419", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:51 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:02:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 23 19:02:51 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 23 19:02:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "format": "json"}]: dispatch
Jan 23 19:02:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 83 KiB/s wr, 14 op/s
Jan 23 19:02:52 compute-0 ceph-mon[75097]: osdmap e137: 3 total, 3 up, 3 in
Jan 23 19:02:52 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "format": "json"}]: dispatch
Jan 23 19:02:52 compute-0 ceph-mon[75097]: pgmap v979: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 83 KiB/s wr, 14 op/s
Jan 23 19:02:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 79 KiB/s wr, 12 op/s
Jan 23 19:02:54 compute-0 ceph-mon[75097]: pgmap v980: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 79 KiB/s wr, 12 op/s
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "format": "json"}]: dispatch
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, vol_name:cephfs) < ""
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, vol_name:cephfs) < ""
Jan 23 19:02:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:02:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:02:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:02:56 compute-0 nova_compute[240119]: 2026-01-23 19:02:56.356 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 79 KiB/s wr, 12 op/s
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a9ac4747-bff1-455b-923a-8ae6bf6a959a", "format": "json"}]: dispatch
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a9ac4747-bff1-455b-923a-8ae6bf6a959a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a9ac4747-bff1-455b-923a-8ae6bf6a959a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:02:56 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:02:56.610+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a9ac4747-bff1-455b-923a-8ae6bf6a959a' of type subvolume
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a9ac4747-bff1-455b-923a-8ae6bf6a959a' of type subvolume
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a9ac4747-bff1-455b-923a-8ae6bf6a959a", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a9ac4747-bff1-455b-923a-8ae6bf6a959a, vol_name:cephfs) < ""
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a9ac4747-bff1-455b-923a-8ae6bf6a959a'' moved to trashcan
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:02:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a9ac4747-bff1-455b-923a-8ae6bf6a959a, vol_name:cephfs) < ""
Jan 23 19:02:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:02:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2806887389' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:02:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:02:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2806887389' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:02:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "format": "json"}]: dispatch
Jan 23 19:02:57 compute-0 ceph-mon[75097]: pgmap v981: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 79 KiB/s wr, 12 op/s
Jan 23 19:02:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2806887389' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:02:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2806887389' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:02:57 compute-0 nova_compute[240119]: 2026-01-23 19:02:57.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:02:57 compute-0 nova_compute[240119]: 2026-01-23 19:02:57.378 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:02:57 compute-0 nova_compute[240119]: 2026-01-23 19:02:57.379 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:02:57 compute-0 nova_compute[240119]: 2026-01-23 19:02:57.379 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:02:57 compute-0 nova_compute[240119]: 2026-01-23 19:02:57.379 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:02:57 compute-0 nova_compute[240119]: 2026-01-23 19:02:57.380 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:02:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:02:57 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185175105' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:02:57 compute-0 nova_compute[240119]: 2026-01-23 19:02:57.947 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.091 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.092 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5119MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.093 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.093 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.186 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.186 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.211 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:02:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a9ac4747-bff1-455b-923a-8ae6bf6a959a", "format": "json"}]: dispatch
Jan 23 19:02:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a9ac4747-bff1-455b-923a-8ae6bf6a959a", "force": true, "format": "json"}]: dispatch
Jan 23 19:02:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1185175105' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:02:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 79 KiB/s wr, 12 op/s
Jan 23 19:02:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:02:58 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3786733380' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.790 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.796 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.918 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.921 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:02:58 compute-0 nova_compute[240119]: 2026-01-23 19:02:58.921 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:02:59 compute-0 ceph-mon[75097]: pgmap v982: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 79 KiB/s wr, 12 op/s
Jan 23 19:02:59 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3786733380' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:02:59 compute-0 nova_compute[240119]: 2026-01-23 19:02:59.922 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:02:59 compute-0 nova_compute[240119]: 2026-01-23 19:02:59.922 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:03:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:03:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:03:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:03:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:03:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:03:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:03:00 compute-0 nova_compute[240119]: 2026-01-23 19:03:00.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:03:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 43 KiB/s wr, 5 op/s
Jan 23 19:03:00 compute-0 ceph-mon[75097]: pgmap v983: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 43 KiB/s wr, 5 op/s
Jan 23 19:03:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 23 19:03:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 23 19:03:01 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 23 19:03:01 compute-0 nova_compute[240119]: 2026-01-23 19:03:01.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:03:01 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "format": "json"}]: dispatch
Jan 23 19:03:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:03:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:03:01 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, vol_name:cephfs) < ""
Jan 23 19:03:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/821840f9-97a9-48b2-a779-bb77ecd1f3bf'' moved to trashcan
Jan 23 19:03:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:03:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:821840f9-97a9-48b2-a779-bb77ecd1f3bf, vol_name:cephfs) < ""
Jan 23 19:03:01 compute-0 podman[247024]: 2026-01-23 19:03:01.986129353 +0000 UTC m=+0.071683258 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 19:03:02 compute-0 ceph-mon[75097]: osdmap e138: 3 total, 3 up, 3 in
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50/906a1cf5-1886-4fbc-8409-b6578de293bc'.
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50/.meta.tmp'
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50/.meta.tmp' to config b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50/.meta'
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "format": "json"}]: dispatch
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:03:02 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 43 KiB/s wr, 5 op/s
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "638d8bf8-4cef-4bb0-add8-32a6d6937431", "format": "json"}]: dispatch
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:638d8bf8-4cef-4bb0-add8-32a6d6937431, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:638d8bf8-4cef-4bb0-add8-32a6d6937431, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:03:02 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:03:02.602+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '638d8bf8-4cef-4bb0-add8-32a6d6937431' of type subvolume
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '638d8bf8-4cef-4bb0-add8-32a6d6937431' of type subvolume
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "638d8bf8-4cef-4bb0-add8-32a6d6937431", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:638d8bf8-4cef-4bb0-add8-32a6d6937431, vol_name:cephfs) < ""
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/638d8bf8-4cef-4bb0-add8-32a6d6937431'' moved to trashcan
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:03:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:638d8bf8-4cef-4bb0-add8-32a6d6937431, vol_name:cephfs) < ""
Jan 23 19:03:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "format": "json"}]: dispatch
Jan 23 19:03:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "821840f9-97a9-48b2-a779-bb77ecd1f3bf", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "format": "json"}]: dispatch
Jan 23 19:03:03 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:03 compute-0 ceph-mon[75097]: pgmap v985: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 43 KiB/s wr, 5 op/s
Jan 23 19:03:03 compute-0 nova_compute[240119]: 2026-01-23 19:03:03.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:03:03 compute-0 nova_compute[240119]: 2026-01-23 19:03:03.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:03:03 compute-0 nova_compute[240119]: 2026-01-23 19:03:03.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:03:03 compute-0 nova_compute[240119]: 2026-01-23 19:03:03.366 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:03:03 compute-0 nova_compute[240119]: 2026-01-23 19:03:03.366 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:03:03 compute-0 nova_compute[240119]: 2026-01-23 19:03:03.366 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:03:03 compute-0 nova_compute[240119]: 2026-01-23 19:03:03.367 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:03:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "638d8bf8-4cef-4bb0-add8-32a6d6937431", "format": "json"}]: dispatch
Jan 23 19:03:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "638d8bf8-4cef-4bb0-add8-32a6d6937431", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Jan 23 19:03:05 compute-0 ceph-mon[75097]: pgmap v986: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Jan 23 19:03:05 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "snap_name": "0d399fc2-1c4a-4fa4-a842-d2d9e4db8512_22f9ca4d-0f2d-43f9-b7c2-39a93632a737", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0d399fc2-1c4a-4fa4-a842-d2d9e4db8512_22f9ca4d-0f2d-43f9-b7c2-39a93632a737, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:03:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta.tmp'
Jan 23 19:03:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta.tmp' to config b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta'
Jan 23 19:03:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0d399fc2-1c4a-4fa4-a842-d2d9e4db8512_22f9ca4d-0f2d-43f9-b7c2-39a93632a737, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:03:05 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "snap_name": "0d399fc2-1c4a-4fa4-a842-d2d9e4db8512", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0d399fc2-1c4a-4fa4-a842-d2d9e4db8512, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:03:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta.tmp'
Jan 23 19:03:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta.tmp' to config b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af/.meta'
Jan 23 19:03:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0d399fc2-1c4a-4fa4-a842-d2d9e4db8512, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:03:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "snap_name": "0d399fc2-1c4a-4fa4-a842-d2d9e4db8512_22f9ca4d-0f2d-43f9-b7c2-39a93632a737", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "snap_name": "0d399fc2-1c4a-4fa4-a842-d2d9e4db8512", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Jan 23 19:03:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "snap_name": "749ef186-fa39-4590-aae7-f85c11afac2a", "format": "json"}]: dispatch
Jan 23 19:03:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:749ef186-fa39-4590-aae7-f85c11afac2a, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:749ef186-fa39-4590-aae7-f85c11afac2a, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:07 compute-0 ceph-mon[75097]: pgmap v987: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Jan 23 19:03:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "snap_name": "749ef186-fa39-4590-aae7-f85c11afac2a", "format": "json"}]: dispatch
Jan 23 19:03:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Jan 23 19:03:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "format": "json"}]: dispatch
Jan 23 19:03:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:03:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:03:08 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:03:08.891+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6c4c50e-f72c-439c-a86a-fc2726fe09af' of type subvolume
Jan 23 19:03:08 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6c4c50e-f72c-439c-a86a-fc2726fe09af' of type subvolume
Jan 23 19:03:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:03:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f6c4c50e-f72c-439c-a86a-fc2726fe09af'' moved to trashcan
Jan 23 19:03:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:03:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6c4c50e-f72c-439c-a86a-fc2726fe09af, vol_name:cephfs) < ""
Jan 23 19:03:09 compute-0 ceph-mon[75097]: pgmap v988: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Jan 23 19:03:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "format": "json"}]: dispatch
Jan 23 19:03:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6c4c50e-f72c-439c-a86a-fc2726fe09af", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 51 KiB/s wr, 6 op/s
Jan 23 19:03:10 compute-0 podman[247051]: 2026-01-23 19:03:10.991528947 +0000 UTC m=+0.059106301 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "snap_name": "749ef186-fa39-4590-aae7-f85c11afac2a_de85100c-6917-4012-99ca-41973a562a73", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:749ef186-fa39-4590-aae7-f85c11afac2a_de85100c-6917-4012-99ca-41973a562a73, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50/.meta.tmp'
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50/.meta.tmp' to config b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50/.meta'
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:749ef186-fa39-4590-aae7-f85c11afac2a_de85100c-6917-4012-99ca-41973a562a73, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "snap_name": "749ef186-fa39-4590-aae7-f85c11afac2a", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:749ef186-fa39-4590-aae7-f85c11afac2a, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50/.meta.tmp'
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50/.meta.tmp' to config b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50/.meta'
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:749ef186-fa39-4590-aae7-f85c11afac2a, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 23 19:03:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 23 19:03:11 compute-0 ceph-mon[75097]: pgmap v989: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 51 KiB/s wr, 6 op/s
Jan 23 19:03:11 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "format": "json"}]: dispatch
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:26b68481-1e3f-49e4-928e-ce608e53fe50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:26b68481-1e3f-49e4-928e-ce608e53fe50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:03:11 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:03:11.963+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '26b68481-1e3f-49e4-928e-ce608e53fe50' of type subvolume
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '26b68481-1e3f-49e4-928e-ce608e53fe50' of type subvolume
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/26b68481-1e3f-49e4-928e-ce608e53fe50'' moved to trashcan
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:03:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:26b68481-1e3f-49e4-928e-ce608e53fe50, vol_name:cephfs) < ""
Jan 23 19:03:12 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "snap_name": "749ef186-fa39-4590-aae7-f85c11afac2a_de85100c-6917-4012-99ca-41973a562a73", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:12 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "snap_name": "749ef186-fa39-4590-aae7-f85c11afac2a", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:12 compute-0 ceph-mon[75097]: osdmap e139: 3 total, 3 up, 3 in
Jan 23 19:03:12 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 19:03:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 51 KiB/s wr, 6 op/s
Jan 23 19:03:13 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "format": "json"}]: dispatch
Jan 23 19:03:13 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "26b68481-1e3f-49e4-928e-ce608e53fe50", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:13 compute-0 ceph-mon[75097]: pgmap v991: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 51 KiB/s wr, 6 op/s
Jan 23 19:03:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:03:13.583 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:03:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:03:13.584 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:03:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:03:13.584 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:03:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 48 KiB/s wr, 8 op/s
Jan 23 19:03:14 compute-0 ceph-mon[75097]: pgmap v992: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 48 KiB/s wr, 8 op/s
Jan 23 19:03:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 23 19:03:16 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:03:16.150 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 19:03:16 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:03:16.151 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 19:03:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 23 19:03:16 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 23 19:03:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 60 KiB/s wr, 10 op/s
Jan 23 19:03:17 compute-0 ceph-mon[75097]: osdmap e140: 3 total, 3 up, 3 in
Jan 23 19:03:17 compute-0 ceph-mon[75097]: pgmap v994: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 60 KiB/s wr, 10 op/s
Jan 23 19:03:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 26 KiB/s wr, 5 op/s
Jan 23 19:03:19 compute-0 ceph-mon[75097]: pgmap v995: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 26 KiB/s wr, 5 op/s
Jan 23 19:03:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:20 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363'.
Jan 23 19:03:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/.meta.tmp'
Jan 23 19:03:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/.meta.tmp' to config b'/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/.meta'
Jan 23 19:03:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "format": "json"}]: dispatch
Jan 23 19:03:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:03:20 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:20 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 674 B/s rd, 31 KiB/s wr, 6 op/s
Jan 23 19:03:21 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:03:21.152 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 19:03:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 23 19:03:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 23 19:03:21 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 23 19:03:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "format": "json"}]: dispatch
Jan 23 19:03:21 compute-0 ceph-mon[75097]: pgmap v996: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 674 B/s rd, 31 KiB/s wr, 6 op/s
Jan 23 19:03:22 compute-0 ceph-mon[75097]: osdmap e141: 3 total, 3 up, 3 in
Jan 23 19:03:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s wr, 1 op/s
Jan 23 19:03:23 compute-0 ceph-mon[75097]: pgmap v998: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s wr, 1 op/s
Jan 23 19:03:23 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:03:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:03:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:03:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:03:23 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:03:24 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:03:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:03:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:03:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s wr, 2 op/s
Jan 23 19:03:25 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:03:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:25 compute-0 ceph-mon[75097]: pgmap v999: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s wr, 2 op/s
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.514101) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195005514156, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1539, "num_deletes": 254, "total_data_size": 1995192, "memory_usage": 2031168, "flush_reason": "Manual Compaction"}
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195005528827, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1972515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19820, "largest_seqno": 21358, "table_properties": {"data_size": 1965200, "index_size": 4133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17781, "raw_average_key_size": 21, "raw_value_size": 1949535, "raw_average_value_size": 2323, "num_data_blocks": 186, "num_entries": 839, "num_filter_entries": 839, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769194902, "oldest_key_time": 1769194902, "file_creation_time": 1769195005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 14764 microseconds, and 5998 cpu microseconds.
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.528867) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1972515 bytes OK
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.528889) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.532035) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.532060) EVENT_LOG_v1 {"time_micros": 1769195005532054, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.532073) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1987969, prev total WAL file size 2014817, number of live WAL files 2.
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.532725) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1926KB)], [47(7425KB)]
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195005532765, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9576679, "oldest_snapshot_seqno": -1}
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4647 keys, 7827475 bytes, temperature: kUnknown
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195005596707, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7827475, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7795309, "index_size": 19448, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 115470, "raw_average_key_size": 24, "raw_value_size": 7710320, "raw_average_value_size": 1659, "num_data_blocks": 810, "num_entries": 4647, "num_filter_entries": 4647, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769195005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.596980) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7827475 bytes
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.598627) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.5 rd, 122.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.3 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.8) write-amplify(4.0) OK, records in: 5176, records dropped: 529 output_compression: NoCompression
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.598646) EVENT_LOG_v1 {"time_micros": 1769195005598637, "job": 24, "event": "compaction_finished", "compaction_time_micros": 64044, "compaction_time_cpu_micros": 20593, "output_level": 6, "num_output_files": 1, "total_output_size": 7827475, "num_input_records": 5176, "num_output_records": 4647, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195005599121, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195005600544, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.532668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.600620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.600625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.600627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.600629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:03:25 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:03:25.600631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:03:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 23 19:03:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 23 19:03:26 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 23 19:03:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s wr, 3 op/s
Jan 23 19:03:27 compute-0 ceph-mon[75097]: osdmap e142: 3 total, 3 up, 3 in
Jan 23 19:03:27 compute-0 ceph-mon[75097]: pgmap v1001: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s wr, 3 op/s
Jan 23 19:03:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:03:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:03:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:03:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 23 19:03:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:03:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:03:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:03:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:03:27 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:03:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:03:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:03:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:03:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:03:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 1 op/s
Jan 23 19:03:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:03:28
Jan 23 19:03:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:03:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:03:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.log', '.mgr']
Jan 23 19:03:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:03:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:03:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:03:29 compute-0 ceph-mon[75097]: pgmap v1002: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 1 op/s
Jan 23 19:03:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:03:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:03:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:03:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:03:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:03:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:03:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s wr, 4 op/s
Jan 23 19:03:30 compute-0 ceph-mon[75097]: pgmap v1003: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s wr, 4 op/s
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:03:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:03:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:03:31 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:03:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:03:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:03:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:31 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:03:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:03:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s wr, 4 op/s
Jan 23 19:03:32 compute-0 ceph-mon[75097]: pgmap v1004: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s wr, 4 op/s
Jan 23 19:03:33 compute-0 podman[247074]: 2026-01-23 19:03:33.017478214 +0000 UTC m=+0.096274780 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 23 19:03:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 49 KiB/s wr, 5 op/s
Jan 23 19:03:34 compute-0 ceph-mon[75097]: pgmap v1005: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 49 KiB/s wr, 5 op/s
Jan 23 19:03:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:03:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:03:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:03:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 23 19:03:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:03:34 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:03:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:03:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:03:34 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:03:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:03:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:35 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:03:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:03:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:03:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:03:35 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:03:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 101 B/s rd, 48 KiB/s wr, 5 op/s
Jan 23 19:03:36 compute-0 ceph-mon[75097]: pgmap v1006: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 101 B/s rd, 48 KiB/s wr, 5 op/s
Jan 23 19:03:38 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:03:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:03:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:03:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:03:38 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice_bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:03:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 41 KiB/s wr, 4 op/s
Jan 23 19:03:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:03:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:03:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:03:38 compute-0 sudo[247101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:03:38 compute-0 sudo[247101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:03:38 compute-0 sudo[247101]: pam_unix(sudo:session): session closed for user root
Jan 23 19:03:38 compute-0 sudo[247126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:03:38 compute-0 sudo[247126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:03:39 compute-0 sudo[247126]: pam_unix(sudo:session): session closed for user root
Jan 23 19:03:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:03:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:03:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:03:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:03:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:03:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:03:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:03:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:03:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:03:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:03:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:03:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:03:39 compute-0 sudo[247182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:03:39 compute-0 sudo[247182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:03:39 compute-0 sudo[247182]: pam_unix(sudo:session): session closed for user root
Jan 23 19:03:39 compute-0 sudo[247207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:03:39 compute-0 sudo[247207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:03:39 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:03:39 compute-0 ceph-mon[75097]: pgmap v1007: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 41 KiB/s wr, 4 op/s
Jan 23 19:03:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:03:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:03:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:03:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:03:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:03:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:03:39 compute-0 podman[247243]: 2026-01-23 19:03:39.851686416 +0000 UTC m=+0.087665078 container create fb4e89c3e788cf626af3217293738fa1f6aafa6f79117c02bb69f20728bff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_sinoussi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:03:39 compute-0 podman[247243]: 2026-01-23 19:03:39.789176494 +0000 UTC m=+0.025155166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:03:39 compute-0 systemd[1]: Started libpod-conmon-fb4e89c3e788cf626af3217293738fa1f6aafa6f79117c02bb69f20728bff75b.scope.
Jan 23 19:03:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:03:40 compute-0 podman[247243]: 2026-01-23 19:03:40.039112538 +0000 UTC m=+0.275091220 container init fb4e89c3e788cf626af3217293738fa1f6aafa6f79117c02bb69f20728bff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 19:03:40 compute-0 podman[247243]: 2026-01-23 19:03:40.04963044 +0000 UTC m=+0.285609102 container start fb4e89c3e788cf626af3217293738fa1f6aafa6f79117c02bb69f20728bff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_sinoussi, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:03:40 compute-0 jovial_sinoussi[247259]: 167 167
Jan 23 19:03:40 compute-0 systemd[1]: libpod-fb4e89c3e788cf626af3217293738fa1f6aafa6f79117c02bb69f20728bff75b.scope: Deactivated successfully.
Jan 23 19:03:40 compute-0 podman[247243]: 2026-01-23 19:03:40.062202432 +0000 UTC m=+0.298181094 container attach fb4e89c3e788cf626af3217293738fa1f6aafa6f79117c02bb69f20728bff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_sinoussi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:03:40 compute-0 podman[247243]: 2026-01-23 19:03:40.063399132 +0000 UTC m=+0.299377794 container died fb4e89c3e788cf626af3217293738fa1f6aafa6f79117c02bb69f20728bff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_sinoussi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 19:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-582063c08539cc88f3e4670c46d7836fe0232fb1d77e0e836472a1ceae78dc5e-merged.mount: Deactivated successfully.
Jan 23 19:03:40 compute-0 podman[247243]: 2026-01-23 19:03:40.357718748 +0000 UTC m=+0.593697410 container remove fb4e89c3e788cf626af3217293738fa1f6aafa6f79117c02bb69f20728bff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 23 19:03:40 compute-0 systemd[1]: libpod-conmon-fb4e89c3e788cf626af3217293738fa1f6aafa6f79117c02bb69f20728bff75b.scope: Deactivated successfully.
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658955772942907 of space, bias 1.0, pg target 0.19976867318828723 quantized to 32 (current 32)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00011914479838860193 of space, bias 4.0, pg target 0.14297375806632232 quantized to 16 (current 16)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:03:40 compute-0 podman[247284]: 2026-01-23 19:03:40.535664636 +0000 UTC m=+0.052327380 container create 3b5e1630d348be57a90e334b98d37a103531c227b29aadec4698ac5034c4af7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 68 KiB/s wr, 8 op/s
Jan 23 19:03:40 compute-0 systemd[1]: Started libpod-conmon-3b5e1630d348be57a90e334b98d37a103531c227b29aadec4698ac5034c4af7b.scope.
Jan 23 19:03:40 compute-0 podman[247284]: 2026-01-23 19:03:40.510434309 +0000 UTC m=+0.027097033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:03:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:03:40 compute-0 ceph-mon[75097]: pgmap v1008: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 68 KiB/s wr, 8 op/s
Jan 23 19:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acb561d130575b59422c4dc1d710db6fd89c81bdb09be630c2acf0abb1804a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acb561d130575b59422c4dc1d710db6fd89c81bdb09be630c2acf0abb1804a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acb561d130575b59422c4dc1d710db6fd89c81bdb09be630c2acf0abb1804a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acb561d130575b59422c4dc1d710db6fd89c81bdb09be630c2acf0abb1804a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acb561d130575b59422c4dc1d710db6fd89c81bdb09be630c2acf0abb1804a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:40 compute-0 podman[247284]: 2026-01-23 19:03:40.646162529 +0000 UTC m=+0.162825253 container init 3b5e1630d348be57a90e334b98d37a103531c227b29aadec4698ac5034c4af7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 19:03:40 compute-0 podman[247284]: 2026-01-23 19:03:40.652171168 +0000 UTC m=+0.168833872 container start 3b5e1630d348be57a90e334b98d37a103531c227b29aadec4698ac5034c4af7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kalam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 23 19:03:40 compute-0 podman[247284]: 2026-01-23 19:03:40.657258054 +0000 UTC m=+0.173920788 container attach 3b5e1630d348be57a90e334b98d37a103531c227b29aadec4698ac5034c4af7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kalam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee5c9b95-36ca-40c5-b310-750aabee4b07", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee5c9b95-36ca-40c5-b310-750aabee4b07, vol_name:cephfs) < ""
Jan 23 19:03:40 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ee5c9b95-36ca-40c5-b310-750aabee4b07/5d542ba0-e40a-46d8-ae5d-1a05dc65bf0f'.
Jan 23 19:03:41 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee5c9b95-36ca-40c5-b310-750aabee4b07/.meta.tmp'
Jan 23 19:03:41 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee5c9b95-36ca-40c5-b310-750aabee4b07/.meta.tmp' to config b'/volumes/_nogroup/ee5c9b95-36ca-40c5-b310-750aabee4b07/.meta'
Jan 23 19:03:41 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee5c9b95-36ca-40c5-b310-750aabee4b07, vol_name:cephfs) < ""
Jan 23 19:03:41 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee5c9b95-36ca-40c5-b310-750aabee4b07", "format": "json"}]: dispatch
Jan 23 19:03:41 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee5c9b95-36ca-40c5-b310-750aabee4b07, vol_name:cephfs) < ""
Jan 23 19:03:41 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee5c9b95-36ca-40c5-b310-750aabee4b07, vol_name:cephfs) < ""
Jan 23 19:03:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:03:41 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:41 compute-0 jolly_kalam[247301]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:03:41 compute-0 jolly_kalam[247301]: --> All data devices are unavailable
Jan 23 19:03:41 compute-0 systemd[1]: libpod-3b5e1630d348be57a90e334b98d37a103531c227b29aadec4698ac5034c4af7b.scope: Deactivated successfully.
Jan 23 19:03:41 compute-0 podman[247284]: 2026-01-23 19:03:41.184531794 +0000 UTC m=+0.701194508 container died 3b5e1630d348be57a90e334b98d37a103531c227b29aadec4698ac5034c4af7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 19:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5acb561d130575b59422c4dc1d710db6fd89c81bdb09be630c2acf0abb1804a6-merged.mount: Deactivated successfully.
Jan 23 19:03:41 compute-0 podman[247284]: 2026-01-23 19:03:41.349104909 +0000 UTC m=+0.865767613 container remove 3b5e1630d348be57a90e334b98d37a103531c227b29aadec4698ac5034c4af7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:03:41 compute-0 podman[247322]: 2026-01-23 19:03:41.35922489 +0000 UTC m=+0.136127660 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 23 19:03:41 compute-0 systemd[1]: libpod-conmon-3b5e1630d348be57a90e334b98d37a103531c227b29aadec4698ac5034c4af7b.scope: Deactivated successfully.
Jan 23 19:03:41 compute-0 sudo[247207]: pam_unix(sudo:session): session closed for user root
Jan 23 19:03:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:41 compute-0 sudo[247348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:03:41 compute-0 sudo[247348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:03:41 compute-0 sudo[247348]: pam_unix(sudo:session): session closed for user root
Jan 23 19:03:41 compute-0 sudo[247373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:03:41 compute-0 sudo[247373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:03:41 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee5c9b95-36ca-40c5-b310-750aabee4b07", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:41 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee5c9b95-36ca-40c5-b310-750aabee4b07", "format": "json"}]: dispatch
Jan 23 19:03:41 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:41 compute-0 podman[247410]: 2026-01-23 19:03:41.849131342 +0000 UTC m=+0.073059684 container create c23dd82f892b98fc5ef17c0f03cd0a2094b7033a9db0d2c2f8190c985e35f1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pike, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 23 19:03:41 compute-0 systemd[1]: Started libpod-conmon-c23dd82f892b98fc5ef17c0f03cd0a2094b7033a9db0d2c2f8190c985e35f1d8.scope.
Jan 23 19:03:41 compute-0 podman[247410]: 2026-01-23 19:03:41.796844615 +0000 UTC m=+0.020772977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:03:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:03:41 compute-0 podman[247410]: 2026-01-23 19:03:41.944685904 +0000 UTC m=+0.168614266 container init c23dd82f892b98fc5ef17c0f03cd0a2094b7033a9db0d2c2f8190c985e35f1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pike, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:03:41 compute-0 podman[247410]: 2026-01-23 19:03:41.951807592 +0000 UTC m=+0.175735924 container start c23dd82f892b98fc5ef17c0f03cd0a2094b7033a9db0d2c2f8190c985e35f1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pike, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 19:03:41 compute-0 confident_pike[247426]: 167 167
Jan 23 19:03:41 compute-0 systemd[1]: libpod-c23dd82f892b98fc5ef17c0f03cd0a2094b7033a9db0d2c2f8190c985e35f1d8.scope: Deactivated successfully.
Jan 23 19:03:41 compute-0 podman[247410]: 2026-01-23 19:03:41.974580327 +0000 UTC m=+0.198508669 container attach c23dd82f892b98fc5ef17c0f03cd0a2094b7033a9db0d2c2f8190c985e35f1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pike, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:03:41 compute-0 podman[247410]: 2026-01-23 19:03:41.97711091 +0000 UTC m=+0.201039262 container died c23dd82f892b98fc5ef17c0f03cd0a2094b7033a9db0d2c2f8190c985e35f1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pike, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:03:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:03:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f86b5a45682c4e053800845241809c53e3872923f2fc49fb3e2cc514940da7f9-merged.mount: Deactivated successfully.
Jan 23 19:03:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:03:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:03:42 compute-0 podman[247410]: 2026-01-23 19:03:42.27279878 +0000 UTC m=+0.496727142 container remove c23dd82f892b98fc5ef17c0f03cd0a2094b7033a9db0d2c2f8190c985e35f1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pike, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 19:03:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 23 19:03:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:03:42 compute-0 systemd[1]: libpod-conmon-c23dd82f892b98fc5ef17c0f03cd0a2094b7033a9db0d2c2f8190c985e35f1d8.scope: Deactivated successfully.
Jan 23 19:03:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:03:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:03:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:03:42 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:03:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:03:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:42 compute-0 podman[247451]: 2026-01-23 19:03:42.539890741 +0000 UTC m=+0.126037220 container create c290f4aac1dc65b1c020c7d2ab66964b2b6db8df7eba0142aad2b5fd86a210c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mayer, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 19:03:42 compute-0 podman[247451]: 2026-01-23 19:03:42.450225424 +0000 UTC m=+0.036371923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:03:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 46 KiB/s wr, 6 op/s
Jan 23 19:03:42 compute-0 systemd[1]: Started libpod-conmon-c290f4aac1dc65b1c020c7d2ab66964b2b6db8df7eba0142aad2b5fd86a210c1.scope.
Jan 23 19:03:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a392d86a2987226eac3b3b174c43ee77482ea66ec6482519f5bad8de884609b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a392d86a2987226eac3b3b174c43ee77482ea66ec6482519f5bad8de884609b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a392d86a2987226eac3b3b174c43ee77482ea66ec6482519f5bad8de884609b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a392d86a2987226eac3b3b174c43ee77482ea66ec6482519f5bad8de884609b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:42 compute-0 podman[247451]: 2026-01-23 19:03:42.647886492 +0000 UTC m=+0.234032981 container init c290f4aac1dc65b1c020c7d2ab66964b2b6db8df7eba0142aad2b5fd86a210c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mayer, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 19:03:42 compute-0 podman[247451]: 2026-01-23 19:03:42.654063895 +0000 UTC m=+0.240210364 container start c290f4aac1dc65b1c020c7d2ab66964b2b6db8df7eba0142aad2b5fd86a210c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 23 19:03:42 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:03:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:03:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:03:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:03:42 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:03:42 compute-0 ceph-mon[75097]: pgmap v1009: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 46 KiB/s wr, 6 op/s
Jan 23 19:03:42 compute-0 podman[247451]: 2026-01-23 19:03:42.680742347 +0000 UTC m=+0.266888836 container attach c290f4aac1dc65b1c020c7d2ab66964b2b6db8df7eba0142aad2b5fd86a210c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:03:42 compute-0 lucid_mayer[247469]: {
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:     "0": [
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:         {
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "devices": [
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "/dev/loop3"
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             ],
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_name": "ceph_lv0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_size": "21470642176",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "name": "ceph_lv0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "tags": {
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.cluster_name": "ceph",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.crush_device_class": "",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.encrypted": "0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.objectstore": "bluestore",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.osd_id": "0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.type": "block",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.vdo": "0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.with_tpm": "0"
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             },
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "type": "block",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "vg_name": "ceph_vg0"
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:         }
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:     ],
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:     "1": [
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:         {
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "devices": [
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "/dev/loop4"
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             ],
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_name": "ceph_lv1",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_size": "21470642176",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "name": "ceph_lv1",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "tags": {
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.cluster_name": "ceph",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.crush_device_class": "",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.encrypted": "0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.objectstore": "bluestore",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.osd_id": "1",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.type": "block",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.vdo": "0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.with_tpm": "0"
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             },
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "type": "block",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "vg_name": "ceph_vg1"
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:         }
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:     ],
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:     "2": [
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:         {
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "devices": [
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "/dev/loop5"
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             ],
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_name": "ceph_lv2",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_size": "21470642176",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "name": "ceph_lv2",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "tags": {
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.cluster_name": "ceph",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.crush_device_class": "",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.encrypted": "0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.objectstore": "bluestore",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.osd_id": "2",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.type": "block",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.vdo": "0",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:                 "ceph.with_tpm": "0"
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             },
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "type": "block",
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:             "vg_name": "ceph_vg2"
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:         }
Jan 23 19:03:42 compute-0 lucid_mayer[247469]:     ]
Jan 23 19:03:42 compute-0 lucid_mayer[247469]: }
Jan 23 19:03:42 compute-0 systemd[1]: libpod-c290f4aac1dc65b1c020c7d2ab66964b2b6db8df7eba0142aad2b5fd86a210c1.scope: Deactivated successfully.
Jan 23 19:03:42 compute-0 podman[247451]: 2026-01-23 19:03:42.957497288 +0000 UTC m=+0.543643777 container died c290f4aac1dc65b1c020c7d2ab66964b2b6db8df7eba0142aad2b5fd86a210c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a392d86a2987226eac3b3b174c43ee77482ea66ec6482519f5bad8de884609b2-merged.mount: Deactivated successfully.
Jan 23 19:03:43 compute-0 podman[247451]: 2026-01-23 19:03:43.003526021 +0000 UTC m=+0.589672490 container remove c290f4aac1dc65b1c020c7d2ab66964b2b6db8df7eba0142aad2b5fd86a210c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:03:43 compute-0 systemd[1]: libpod-conmon-c290f4aac1dc65b1c020c7d2ab66964b2b6db8df7eba0142aad2b5fd86a210c1.scope: Deactivated successfully.
Jan 23 19:03:43 compute-0 sudo[247373]: pam_unix(sudo:session): session closed for user root
Jan 23 19:03:43 compute-0 sudo[247489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:03:43 compute-0 sudo[247489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:03:43 compute-0 sudo[247489]: pam_unix(sudo:session): session closed for user root
Jan 23 19:03:43 compute-0 sudo[247514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:03:43 compute-0 sudo[247514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:03:43 compute-0 podman[247550]: 2026-01-23 19:03:43.458743722 +0000 UTC m=+0.042639541 container create 2664e4095a0d45812ccc74c7dee8b31d34cdf20765a428e0b6a438e3be0fac4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 19:03:43 compute-0 systemd[1]: Started libpod-conmon-2664e4095a0d45812ccc74c7dee8b31d34cdf20765a428e0b6a438e3be0fac4f.scope.
Jan 23 19:03:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:03:43 compute-0 podman[247550]: 2026-01-23 19:03:43.527731854 +0000 UTC m=+0.111627713 container init 2664e4095a0d45812ccc74c7dee8b31d34cdf20765a428e0b6a438e3be0fac4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 19:03:43 compute-0 podman[247550]: 2026-01-23 19:03:43.534877251 +0000 UTC m=+0.118773070 container start 2664e4095a0d45812ccc74c7dee8b31d34cdf20765a428e0b6a438e3be0fac4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 19:03:43 compute-0 podman[247550]: 2026-01-23 19:03:43.441076903 +0000 UTC m=+0.024972742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:03:43 compute-0 podman[247550]: 2026-01-23 19:03:43.538749448 +0000 UTC m=+0.122645287 container attach 2664e4095a0d45812ccc74c7dee8b31d34cdf20765a428e0b6a438e3be0fac4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 23 19:03:43 compute-0 priceless_lehmann[247566]: 167 167
Jan 23 19:03:43 compute-0 systemd[1]: libpod-2664e4095a0d45812ccc74c7dee8b31d34cdf20765a428e0b6a438e3be0fac4f.scope: Deactivated successfully.
Jan 23 19:03:43 compute-0 podman[247550]: 2026-01-23 19:03:43.539939137 +0000 UTC m=+0.123834976 container died 2664e4095a0d45812ccc74c7dee8b31d34cdf20765a428e0b6a438e3be0fac4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 19:03:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-161915e8013fee78032491be90e32a26fc2bdc4f0fd5e426593bc8252ad021e9-merged.mount: Deactivated successfully.
Jan 23 19:03:43 compute-0 podman[247550]: 2026-01-23 19:03:43.586035302 +0000 UTC m=+0.169931121 container remove 2664e4095a0d45812ccc74c7dee8b31d34cdf20765a428e0b6a438e3be0fac4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:03:43 compute-0 systemd[1]: libpod-conmon-2664e4095a0d45812ccc74c7dee8b31d34cdf20765a428e0b6a438e3be0fac4f.scope: Deactivated successfully.
Jan 23 19:03:43 compute-0 podman[247590]: 2026-01-23 19:03:43.72614845 +0000 UTC m=+0.036019086 container create 000bf15dd9613e2bb75b5f7ad0a0d5deaa1aaf15da0d34df158db9cbf6d53e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:03:43 compute-0 systemd[1]: Started libpod-conmon-000bf15dd9613e2bb75b5f7ad0a0d5deaa1aaf15da0d34df158db9cbf6d53e78.scope.
Jan 23 19:03:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95747c18e7d1952f6c078d1b65caaa418bba2dde68deb2691bd059de9aa77840/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95747c18e7d1952f6c078d1b65caaa418bba2dde68deb2691bd059de9aa77840/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95747c18e7d1952f6c078d1b65caaa418bba2dde68deb2691bd059de9aa77840/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95747c18e7d1952f6c078d1b65caaa418bba2dde68deb2691bd059de9aa77840/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:03:43 compute-0 podman[247590]: 2026-01-23 19:03:43.803381147 +0000 UTC m=+0.113251813 container init 000bf15dd9613e2bb75b5f7ad0a0d5deaa1aaf15da0d34df158db9cbf6d53e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:03:43 compute-0 podman[247590]: 2026-01-23 19:03:43.710386279 +0000 UTC m=+0.020256935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:03:43 compute-0 podman[247590]: 2026-01-23 19:03:43.809632252 +0000 UTC m=+0.119502888 container start 000bf15dd9613e2bb75b5f7ad0a0d5deaa1aaf15da0d34df158db9cbf6d53e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:03:43 compute-0 podman[247590]: 2026-01-23 19:03:43.814234517 +0000 UTC m=+0.124105153 container attach 000bf15dd9613e2bb75b5f7ad0a0d5deaa1aaf15da0d34df158db9cbf6d53e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 19:03:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 65 KiB/s wr, 8 op/s
Jan 23 19:03:44 compute-0 lvm[247684]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:03:44 compute-0 lvm[247686]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:03:44 compute-0 lvm[247684]: VG ceph_vg0 finished
Jan 23 19:03:44 compute-0 lvm[247686]: VG ceph_vg1 finished
Jan 23 19:03:44 compute-0 lvm[247688]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:03:44 compute-0 lvm[247688]: VG ceph_vg2 finished
Jan 23 19:03:44 compute-0 ceph-mon[75097]: pgmap v1010: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 65 KiB/s wr, 8 op/s
Jan 23 19:03:44 compute-0 trusting_nash[247607]: {}
Jan 23 19:03:44 compute-0 systemd[1]: libpod-000bf15dd9613e2bb75b5f7ad0a0d5deaa1aaf15da0d34df158db9cbf6d53e78.scope: Deactivated successfully.
Jan 23 19:03:44 compute-0 systemd[1]: libpod-000bf15dd9613e2bb75b5f7ad0a0d5deaa1aaf15da0d34df158db9cbf6d53e78.scope: Consumed 1.386s CPU time.
Jan 23 19:03:44 compute-0 podman[247590]: 2026-01-23 19:03:44.728331669 +0000 UTC m=+1.038202325 container died 000bf15dd9613e2bb75b5f7ad0a0d5deaa1aaf15da0d34df158db9cbf6d53e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:03:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-95747c18e7d1952f6c078d1b65caaa418bba2dde68deb2691bd059de9aa77840-merged.mount: Deactivated successfully.
Jan 23 19:03:44 compute-0 podman[247590]: 2026-01-23 19:03:44.774369191 +0000 UTC m=+1.084239827 container remove 000bf15dd9613e2bb75b5f7ad0a0d5deaa1aaf15da0d34df158db9cbf6d53e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 19:03:44 compute-0 systemd[1]: libpod-conmon-000bf15dd9613e2bb75b5f7ad0a0d5deaa1aaf15da0d34df158db9cbf6d53e78.scope: Deactivated successfully.
Jan 23 19:03:44 compute-0 sudo[247514]: pam_unix(sudo:session): session closed for user root
Jan 23 19:03:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:03:44 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:03:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:03:44 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:03:44 compute-0 sudo[247702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:03:44 compute-0 sudo[247702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:03:44 compute-0 sudo[247702]: pam_unix(sudo:session): session closed for user root
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2cc197a2-1b4a-427e-9d00-dd49916b35df", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2cc197a2-1b4a-427e-9d00-dd49916b35df, vol_name:cephfs) < ""
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/2cc197a2-1b4a-427e-9d00-dd49916b35df/17390c27-2061-4f50-9c26-0635782ac042'.
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2cc197a2-1b4a-427e-9d00-dd49916b35df/.meta.tmp'
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2cc197a2-1b4a-427e-9d00-dd49916b35df/.meta.tmp' to config b'/volumes/_nogroup/2cc197a2-1b4a-427e-9d00-dd49916b35df/.meta'
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2cc197a2-1b4a-427e-9d00-dd49916b35df, vol_name:cephfs) < ""
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2cc197a2-1b4a-427e-9d00-dd49916b35df", "format": "json"}]: dispatch
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2cc197a2-1b4a-427e-9d00-dd49916b35df, vol_name:cephfs) < ""
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2cc197a2-1b4a-427e-9d00-dd49916b35df, vol_name:cephfs) < ""
Jan 23 19:03:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:03:45 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:03:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:03:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:03:45 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice_bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:03:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:03:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:45 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:03:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:03:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:03:45 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2cc197a2-1b4a-427e-9d00-dd49916b35df", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:45 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2cc197a2-1b4a-427e-9d00-dd49916b35df", "format": "json"}]: dispatch
Jan 23 19:03:45 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:45 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:03:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:03:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 6 op/s
Jan 23 19:03:46 compute-0 ceph-mon[75097]: pgmap v1011: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 6 op/s
Jan 23 19:03:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 6 op/s
Jan 23 19:03:48 compute-0 ceph-mon[75097]: pgmap v1012: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 6 op/s
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:03:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:03:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 23 19:03:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:03:49 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:03:49 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3f4b47db-bc72-4155-915f-95273d2ed043", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3f4b47db-bc72-4155-915f-95273d2ed043, vol_name:cephfs) < ""
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/3f4b47db-bc72-4155-915f-95273d2ed043/58324401-fe57-4f8a-a583-6442ca4cfa06'.
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3f4b47db-bc72-4155-915f-95273d2ed043/.meta.tmp'
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3f4b47db-bc72-4155-915f-95273d2ed043/.meta.tmp' to config b'/volumes/_nogroup/3f4b47db-bc72-4155-915f-95273d2ed043/.meta'
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3f4b47db-bc72-4155-915f-95273d2ed043, vol_name:cephfs) < ""
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3f4b47db-bc72-4155-915f-95273d2ed043", "format": "json"}]: dispatch
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3f4b47db-bc72-4155-915f-95273d2ed043, vol_name:cephfs) < ""
Jan 23 19:03:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3f4b47db-bc72-4155-915f-95273d2ed043, vol_name:cephfs) < ""
Jan 23 19:03:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:03:49 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:49 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:03:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:03:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:03:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:03:49 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:03:49 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3f4b47db-bc72-4155-915f-95273d2ed043", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:49 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3f4b47db-bc72-4155-915f-95273d2ed043", "format": "json"}]: dispatch
Jan 23 19:03:49 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 74 KiB/s wr, 9 op/s
Jan 23 19:03:50 compute-0 ceph-mon[75097]: pgmap v1013: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 74 KiB/s wr, 9 op/s
Jan 23 19:03:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e'.
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/.meta.tmp'
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/.meta.tmp' to config b'/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/.meta'
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "format": "json"}]: dispatch
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:03:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:03:52 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:03:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:03:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:03:52 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:03:52 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:52 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 46 KiB/s wr, 5 op/s
Jan 23 19:03:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:03:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:03:53 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e51baba4-47d7-48aa-a6e0-ba58a86949b3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e51baba4-47d7-48aa-a6e0-ba58a86949b3, vol_name:cephfs) < ""
Jan 23 19:03:53 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e51baba4-47d7-48aa-a6e0-ba58a86949b3/ffaa2ae9-32ce-459c-8b7f-8295bd6cd7c9'.
Jan 23 19:03:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e51baba4-47d7-48aa-a6e0-ba58a86949b3/.meta.tmp'
Jan 23 19:03:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e51baba4-47d7-48aa-a6e0-ba58a86949b3/.meta.tmp' to config b'/volumes/_nogroup/e51baba4-47d7-48aa-a6e0-ba58a86949b3/.meta'
Jan 23 19:03:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e51baba4-47d7-48aa-a6e0-ba58a86949b3, vol_name:cephfs) < ""
Jan 23 19:03:53 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e51baba4-47d7-48aa-a6e0-ba58a86949b3", "format": "json"}]: dispatch
Jan 23 19:03:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e51baba4-47d7-48aa-a6e0-ba58a86949b3, vol_name:cephfs) < ""
Jan 23 19:03:53 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e51baba4-47d7-48aa-a6e0-ba58a86949b3, vol_name:cephfs) < ""
Jan 23 19:03:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:03:53 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "format": "json"}]: dispatch
Jan 23 19:03:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:03:53 compute-0 ceph-mon[75097]: pgmap v1014: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 46 KiB/s wr, 5 op/s
Jan 23 19:03:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:53 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:03:54 compute-0 nova_compute[240119]: 2026-01-23 19:03:54.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:03:54 compute-0 nova_compute[240119]: 2026-01-23 19:03:54.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 19:03:54 compute-0 nova_compute[240119]: 2026-01-23 19:03:54.364 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 19:03:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 94 KiB/s wr, 11 op/s
Jan 23 19:03:54 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e51baba4-47d7-48aa-a6e0-ba58a86949b3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:03:54 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e51baba4-47d7-48aa-a6e0-ba58a86949b3", "format": "json"}]: dispatch
Jan 23 19:03:55 compute-0 ceph-mon[75097]: pgmap v1015: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 94 KiB/s wr, 11 op/s
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve49", "tenant_id": "182b21a4575a4acc866fcdd5f8991734", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, tenant_id:182b21a4575a4acc866fcdd5f8991734, vol_name:cephfs) < ""
Jan 23 19:03:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Jan 23 19:03:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 23 19:03:56 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID eve49 with tenant 182b21a4575a4acc866fcdd5f8991734
Jan 23 19:03:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:03:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, tenant_id:182b21a4575a4acc866fcdd5f8991734, vol_name:cephfs) < ""
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:03:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:03:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 23 19:03:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:03:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:03:56 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:03:56 compute-0 nova_compute[240119]: 2026-01-23 19:03:56.358 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:03:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:03:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 75 KiB/s wr, 9 op/s
Jan 23 19:03:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 23 19:03:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:03:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:03:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:03:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:03:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:03:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:03:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3449777732' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:03:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:03:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3449777732' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:03:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e51baba4-47d7-48aa-a6e0-ba58a86949b3", "format": "json"}]: dispatch
Jan 23 19:03:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e51baba4-47d7-48aa-a6e0-ba58a86949b3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:03:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e51baba4-47d7-48aa-a6e0-ba58a86949b3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:03:57 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:03:57.523+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e51baba4-47d7-48aa-a6e0-ba58a86949b3' of type subvolume
Jan 23 19:03:57 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e51baba4-47d7-48aa-a6e0-ba58a86949b3' of type subvolume
Jan 23 19:03:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e51baba4-47d7-48aa-a6e0-ba58a86949b3", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e51baba4-47d7-48aa-a6e0-ba58a86949b3, vol_name:cephfs) < ""
Jan 23 19:03:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e51baba4-47d7-48aa-a6e0-ba58a86949b3'' moved to trashcan
Jan 23 19:03:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:03:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e51baba4-47d7-48aa-a6e0-ba58a86949b3, vol_name:cephfs) < ""
Jan 23 19:03:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve49", "tenant_id": "182b21a4575a4acc866fcdd5f8991734", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:03:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:03:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:03:57 compute-0 ceph-mon[75097]: pgmap v1016: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 75 KiB/s wr, 9 op/s
Jan 23 19:03:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3449777732' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:03:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3449777732' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:03:58 compute-0 nova_compute[240119]: 2026-01-23 19:03:58.347 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:03:58 compute-0 nova_compute[240119]: 2026-01-23 19:03:58.376 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:03:58 compute-0 nova_compute[240119]: 2026-01-23 19:03:58.376 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:03:58 compute-0 nova_compute[240119]: 2026-01-23 19:03:58.376 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:03:58 compute-0 nova_compute[240119]: 2026-01-23 19:03:58.376 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:03:58 compute-0 nova_compute[240119]: 2026-01-23 19:03:58.377 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:03:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 75 KiB/s wr, 9 op/s
Jan 23 19:03:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e51baba4-47d7-48aa-a6e0-ba58a86949b3", "format": "json"}]: dispatch
Jan 23 19:03:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e51baba4-47d7-48aa-a6e0-ba58a86949b3", "force": true, "format": "json"}]: dispatch
Jan 23 19:03:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:03:58 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4211027311' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:03:58 compute-0 nova_compute[240119]: 2026-01-23 19:03:58.950 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:03:59 compute-0 nova_compute[240119]: 2026-01-23 19:03:59.114 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:03:59 compute-0 nova_compute[240119]: 2026-01-23 19:03:59.115 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5097MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:03:59 compute-0 nova_compute[240119]: 2026-01-23 19:03:59.115 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:03:59 compute-0 nova_compute[240119]: 2026-01-23 19:03:59.116 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:03:59 compute-0 nova_compute[240119]: 2026-01-23 19:03:59.422 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:03:59 compute-0 nova_compute[240119]: 2026-01-23 19:03:59.423 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:03:59 compute-0 nova_compute[240119]: 2026-01-23 19:03:59.449 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:03:59 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve48", "tenant_id": "182b21a4575a4acc866fcdd5f8991734", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:03:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, tenant_id:182b21a4575a4acc866fcdd5f8991734, vol_name:cephfs) < ""
Jan 23 19:03:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Jan 23 19:03:59 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 23 19:03:59 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID eve48 with tenant 182b21a4575a4acc866fcdd5f8991734
Jan 23 19:04:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:04:00 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486253855' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f06b9d6ff70>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f06b9d6feb0>)]
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f06a1c8b7c0>)]
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.078 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.085 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.102 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.105 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.105 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.106 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:00 compute-0 ceph-mon[75097]: pgmap v1017: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 75 KiB/s wr, 9 op/s
Jan 23 19:04:00 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4211027311' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:04:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.383 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.383 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.384 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.384 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:00 compute-0 nova_compute[240119]: 2026-01-23 19:04:00.384 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 19:04:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:04:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 115 KiB/s wr, 13 op/s
Jan 23 19:04:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, tenant_id:182b21a4575a4acc866fcdd5f8991734, vol_name:cephfs) < ""
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:04:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:00 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:04:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:04:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:01 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3f4b47db-bc72-4155-915f-95273d2ed043", "format": "json"}]: dispatch
Jan 23 19:04:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3f4b47db-bc72-4155-915f-95273d2ed043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3f4b47db-bc72-4155-915f-95273d2ed043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:01 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:04:01.143+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3f4b47db-bc72-4155-915f-95273d2ed043' of type subvolume
Jan 23 19:04:01 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3f4b47db-bc72-4155-915f-95273d2ed043' of type subvolume
Jan 23 19:04:01 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3f4b47db-bc72-4155-915f-95273d2ed043", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3f4b47db-bc72-4155-915f-95273d2ed043, vol_name:cephfs) < ""
Jan 23 19:04:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3f4b47db-bc72-4155-915f-95273d2ed043'' moved to trashcan
Jan 23 19:04:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:04:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3f4b47db-bc72-4155-915f-95273d2ed043, vol_name:cephfs) < ""
Jan 23 19:04:01 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve48", "tenant_id": "182b21a4575a4acc866fcdd5f8991734", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:01 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3486253855' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:04:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:01 compute-0 ceph-mon[75097]: pgmap v1018: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 115 KiB/s wr, 13 op/s
Jan 23 19:04:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:01 compute-0 nova_compute[240119]: 2026-01-23 19:04:01.380 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:02 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:04:02 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3f4b47db-bc72-4155-915f-95273d2ed043", "format": "json"}]: dispatch
Jan 23 19:04:02 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3f4b47db-bc72-4155-915f-95273d2ed043", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:02 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.rszfwe(active, since 29m)
Jan 23 19:04:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 9 op/s
Jan 23 19:04:03 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve48", "format": "json"}]: dispatch
Jan 23 19:04:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:03 compute-0 nova_compute[240119]: 2026-01-23 19:04:03.344 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:03 compute-0 nova_compute[240119]: 2026-01-23 19:04:03.470 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:03 compute-0 nova_compute[240119]: 2026-01-23 19:04:03.470 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:04:03 compute-0 ceph-mon[75097]: mgrmap e13: compute-0.rszfwe(active, since 29m)
Jan 23 19:04:03 compute-0 ceph-mon[75097]: pgmap v1019: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 9 op/s
Jan 23 19:04:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Jan 23 19:04:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 23 19:04:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0)
Jan 23 19:04:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Jan 23 19:04:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Jan 23 19:04:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:03 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve48", "format": "json"}]: dispatch
Jan 23 19:04:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e
Jan 23 19:04:03 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e],prefix=session evict} (starting...)
Jan 23 19:04:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:04:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:04 compute-0 podman[247774]: 2026-01-23 19:04:04.013691458 +0000 UTC m=+0.089415890 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:04:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 23 19:04:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:04:04 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:04 compute-0 nova_compute[240119]: 2026-01-23 19:04:04.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2cc197a2-1b4a-427e-9d00-dd49916b35df", "format": "json"}]: dispatch
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2cc197a2-1b4a-427e-9d00-dd49916b35df, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2cc197a2-1b4a-427e-9d00-dd49916b35df, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:04 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:04:04.491+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2cc197a2-1b4a-427e-9d00-dd49916b35df' of type subvolume
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2cc197a2-1b4a-427e-9d00-dd49916b35df' of type subvolume
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2cc197a2-1b4a-427e-9d00-dd49916b35df", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2cc197a2-1b4a-427e-9d00-dd49916b35df, vol_name:cephfs) < ""
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2cc197a2-1b4a-427e-9d00-dd49916b35df'' moved to trashcan
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2cc197a2-1b4a-427e-9d00-dd49916b35df, vol_name:cephfs) < ""
Jan 23 19:04:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 136 KiB/s wr, 16 op/s
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve48", "format": "json"}]: dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve48", "format": "json"}]: dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2cc197a2-1b4a-427e-9d00-dd49916b35df", "format": "json"}]: dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2cc197a2-1b4a-427e-9d00-dd49916b35df", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:04 compute-0 ceph-mon[75097]: pgmap v1020: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 136 KiB/s wr, 16 op/s
Jan 23 19:04:05 compute-0 nova_compute[240119]: 2026-01-23 19:04:05.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:05 compute-0 nova_compute[240119]: 2026-01-23 19:04:05.350 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:04:05 compute-0 nova_compute[240119]: 2026-01-23 19:04:05.350 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:04:05 compute-0 nova_compute[240119]: 2026-01-23 19:04:05.447 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:04:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 88 KiB/s wr, 11 op/s
Jan 23 19:04:06 compute-0 ceph-mon[75097]: pgmap v1021: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 88 KiB/s wr, 11 op/s
Jan 23 19:04:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:04:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4795 writes, 21K keys, 4795 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4795 writes, 4795 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1550 writes, 7421 keys, 1550 commit groups, 1.0 writes per commit group, ingest: 9.93 MB, 0.02 MB/s
                                           Interval WAL: 1550 writes, 1550 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     25.4      1.00              0.08        12    0.083       0      0       0.0       0.0
                                             L6      1/0    7.46 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     61.0     49.9      1.62              0.28        11    0.147     49K   5836       0.0       0.0
                                            Sum      1/0    7.46 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     37.7     40.6      2.62              0.36        23    0.114     49K   5836       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.3     32.6     32.7      1.71              0.18        12    0.142     29K   3633       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     61.0     49.9      1.62              0.28        11    0.147     49K   5836       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     25.5      1.00              0.08        11    0.090       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.025, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.05 MB/s read, 2.6 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 1.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5639441df8d0#2 capacity: 304.00 MB usage: 9.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000105 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(577,8.99 MB,2.95761%) FilterBlock(24,142.80 KB,0.0458717%) IndexBlock(24,268.47 KB,0.0862423%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:04:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:04:07 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:04:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:04:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve47", "tenant_id": "182b21a4575a4acc866fcdd5f8991734", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, tenant_id:182b21a4575a4acc866fcdd5f8991734, vol_name:cephfs) < ""
Jan 23 19:04:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Jan 23 19:04:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 23 19:04:07 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID eve47 with tenant 182b21a4575a4acc866fcdd5f8991734
Jan 23 19:04:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:04:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, tenant_id:182b21a4575a4acc866fcdd5f8991734, vol_name:cephfs) < ""
Jan 23 19:04:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:04:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve47", "tenant_id": "182b21a4575a4acc866fcdd5f8991734", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 23 19:04:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f985db04-31d5-4dab-ade3-fdc049b4357a", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee5c9b95-36ca-40c5-b310-750aabee4b07", "format": "json"}]: dispatch
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee5c9b95-36ca-40c5-b310-750aabee4b07, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee5c9b95-36ca-40c5-b310-750aabee4b07, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:07 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:04:07.949+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee5c9b95-36ca-40c5-b310-750aabee4b07' of type subvolume
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee5c9b95-36ca-40c5-b310-750aabee4b07' of type subvolume
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee5c9b95-36ca-40c5-b310-750aabee4b07", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee5c9b95-36ca-40c5-b310-750aabee4b07, vol_name:cephfs) < ""
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee5c9b95-36ca-40c5-b310-750aabee4b07'' moved to trashcan
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:04:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee5c9b95-36ca-40c5-b310-750aabee4b07, vol_name:cephfs) < ""
Jan 23 19:04:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 88 KiB/s wr, 11 op/s
Jan 23 19:04:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee5c9b95-36ca-40c5-b310-750aabee4b07", "format": "json"}]: dispatch
Jan 23 19:04:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee5c9b95-36ca-40c5-b310-750aabee4b07", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:08 compute-0 ceph-mon[75097]: pgmap v1022: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 88 KiB/s wr, 11 op/s
Jan 23 19:04:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 146 KiB/s wr, 18 op/s
Jan 23 19:04:10 compute-0 ceph-mon[75097]: pgmap v1023: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 146 KiB/s wr, 18 op/s
Jan 23 19:04:10 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:04:10 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:04:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:04:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 23 19:04:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:04:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:04:11 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:11 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:04:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:04:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:04:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:04:11 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:04:11 compute-0 nova_compute[240119]: 2026-01-23 19:04:11.778 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve47", "format": "json"}]: dispatch
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Jan 23 19:04:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 23 19:04:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0)
Jan 23 19:04:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Jan 23 19:04:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve47", "format": "json"}]: dispatch
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e
Jan 23 19:04:11 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e],prefix=session evict} (starting...)
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:04:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:11 compute-0 podman[247805]: 2026-01-23 19:04:11.97680784 +0000 UTC m=+0.056885274 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 19:04:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 106 KiB/s wr, 14 op/s
Jan 23 19:04:12 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve47", "format": "json"}]: dispatch
Jan 23 19:04:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 23 19:04:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Jan 23 19:04:12 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Jan 23 19:04:12 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve47", "format": "json"}]: dispatch
Jan 23 19:04:12 compute-0 ceph-mon[75097]: pgmap v1024: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 106 KiB/s wr, 14 op/s
Jan 23 19:04:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:04:13.584 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:04:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:04:13.584 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:04:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:04:13.584 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:04:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 131 KiB/s wr, 18 op/s
Jan 23 19:04:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:04:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:04:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:04:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:04:15 compute-0 ceph-mon[75097]: pgmap v1025: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 131 KiB/s wr, 18 op/s
Jan 23 19:04:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:04:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:16 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:04:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:04:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 11 op/s
Jan 23 19:04:17 compute-0 ceph-mon[75097]: pgmap v1026: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 11 op/s
Jan 23 19:04:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 11 op/s
Jan 23 19:04:18 compute-0 ceph-mon[75097]: pgmap v1027: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 11 op/s
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:04:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:04:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 23 19:04:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:04:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:04:20 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 105 KiB/s wr, 14 op/s
Jan 23 19:04:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:04:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:04:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve49", "format": "json"}]: dispatch
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Jan 23 19:04:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 23 19:04:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Jan 23 19:04:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Jan 23 19:04:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve49", "format": "json"}]: dispatch
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e
Jan 23 19:04:20 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a/96161543-030c-47f7-ace7-5f41849f177e],prefix=session evict} (starting...)
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "format": "json"}]: dispatch
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f985db04-31d5-4dab-ade3-fdc049b4357a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f985db04-31d5-4dab-ade3-fdc049b4357a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f985db04-31d5-4dab-ade3-fdc049b4357a' of type subvolume
Jan 23 19:04:20 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:04:20.834+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f985db04-31d5-4dab-ade3-fdc049b4357a' of type subvolume
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f985db04-31d5-4dab-ade3-fdc049b4357a'' moved to trashcan
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:04:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f985db04-31d5-4dab-ade3-fdc049b4357a, vol_name:cephfs) < ""
Jan 23 19:04:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:04:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:04:21 compute-0 ceph-mon[75097]: pgmap v1028: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 105 KiB/s wr, 14 op/s
Jan 23 19:04:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve49", "format": "json"}]: dispatch
Jan 23 19:04:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 23 19:04:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Jan 23 19:04:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Jan 23 19:04:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 46 KiB/s wr, 7 op/s
Jan 23 19:04:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "auth_id": "eve49", "format": "json"}]: dispatch
Jan 23 19:04:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "format": "json"}]: dispatch
Jan 23 19:04:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f985db04-31d5-4dab-ade3-fdc049b4357a", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:22 compute-0 ceph-mon[75097]: pgmap v1029: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 46 KiB/s wr, 7 op/s
Jan 23 19:04:22 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58517681-eea7-4542-9aa9-1caa99f33856", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58517681-eea7-4542-9aa9-1caa99f33856, vol_name:cephfs) < ""
Jan 23 19:04:22 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/58517681-eea7-4542-9aa9-1caa99f33856/ece6447e-bcfc-4f26-ad8c-6adbc2f5f1ce'.
Jan 23 19:04:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/58517681-eea7-4542-9aa9-1caa99f33856/.meta.tmp'
Jan 23 19:04:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/58517681-eea7-4542-9aa9-1caa99f33856/.meta.tmp' to config b'/volumes/_nogroup/58517681-eea7-4542-9aa9-1caa99f33856/.meta'
Jan 23 19:04:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58517681-eea7-4542-9aa9-1caa99f33856, vol_name:cephfs) < ""
Jan 23 19:04:22 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58517681-eea7-4542-9aa9-1caa99f33856", "format": "json"}]: dispatch
Jan 23 19:04:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58517681-eea7-4542-9aa9-1caa99f33856, vol_name:cephfs) < ""
Jan 23 19:04:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58517681-eea7-4542-9aa9-1caa99f33856, vol_name:cephfs) < ""
Jan 23 19:04:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:04:23 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:23 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:04:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:04:23 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice_bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:04:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:04:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:23 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58517681-eea7-4542-9aa9-1caa99f33856", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:23 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58517681-eea7-4542-9aa9-1caa99f33856", "format": "json"}]: dispatch
Jan 23 19:04:23 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:23 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:04:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:23 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 102 KiB/s wr, 14 op/s
Jan 23 19:04:24 compute-0 ceph-mon[75097]: pgmap v1030: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 102 KiB/s wr, 14 op/s
Jan 23 19:04:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 78 KiB/s wr, 10 op/s
Jan 23 19:04:26 compute-0 ceph-mon[75097]: pgmap v1031: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 78 KiB/s wr, 10 op/s
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5229476a-6a4f-49c2-9bcd-5dab192f60a7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5229476a-6a4f-49c2-9bcd-5dab192f60a7, vol_name:cephfs) < ""
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5229476a-6a4f-49c2-9bcd-5dab192f60a7/aa98ede0-bb9c-4d11-a1b5-374c41fb367d'.
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5229476a-6a4f-49c2-9bcd-5dab192f60a7/.meta.tmp'
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5229476a-6a4f-49c2-9bcd-5dab192f60a7/.meta.tmp' to config b'/volumes/_nogroup/5229476a-6a4f-49c2-9bcd-5dab192f60a7/.meta'
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5229476a-6a4f-49c2-9bcd-5dab192f60a7, vol_name:cephfs) < ""
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5229476a-6a4f-49c2-9bcd-5dab192f60a7", "format": "json"}]: dispatch
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5229476a-6a4f-49c2-9bcd-5dab192f60a7, vol_name:cephfs) < ""
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5229476a-6a4f-49c2-9bcd-5dab192f60a7, vol_name:cephfs) < ""
Jan 23 19:04:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:04:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:04:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:04:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 23 19:04:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:04:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:04:27 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:04:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:27 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5229476a-6a4f-49c2-9bcd-5dab192f60a7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:27 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5229476a-6a4f-49c2-9bcd-5dab192f60a7", "format": "json"}]: dispatch
Jan 23 19:04:27 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:27 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:04:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:04:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:04:27 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:04:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 78 KiB/s wr, 9 op/s
Jan 23 19:04:28 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:04:28 compute-0 ceph-mon[75097]: pgmap v1032: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 78 KiB/s wr, 9 op/s
Jan 23 19:04:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:04:28
Jan 23 19:04:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:04:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:04:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.meta']
Jan 23 19:04:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 101 KiB/s wr, 12 op/s
Jan 23 19:04:30 compute-0 ceph-mon[75097]: pgmap v1033: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 101 KiB/s wr, 12 op/s
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5229476a-6a4f-49c2-9bcd-5dab192f60a7", "format": "json"}]: dispatch
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5229476a-6a4f-49c2-9bcd-5dab192f60a7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5229476a-6a4f-49c2-9bcd-5dab192f60a7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:30 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:04:30.714+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5229476a-6a4f-49c2-9bcd-5dab192f60a7' of type subvolume
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5229476a-6a4f-49c2-9bcd-5dab192f60a7' of type subvolume
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5229476a-6a4f-49c2-9bcd-5dab192f60a7", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5229476a-6a4f-49c2-9bcd-5dab192f60a7, vol_name:cephfs) < ""
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5229476a-6a4f-49c2-9bcd-5dab192f60a7'' moved to trashcan
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5229476a-6a4f-49c2-9bcd-5dab192f60a7, vol_name:cephfs) < ""
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:04:30 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:04:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:04:30 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice_bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:04:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:04:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:04:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:04:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:31 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5229476a-6a4f-49c2-9bcd-5dab192f60a7", "format": "json"}]: dispatch
Jan 23 19:04:31 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5229476a-6a4f-49c2-9bcd-5dab192f60a7", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:31 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:04:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:04:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 79 KiB/s wr, 9 op/s
Jan 23 19:04:32 compute-0 ceph-mon[75097]: pgmap v1034: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 79 KiB/s wr, 9 op/s
Jan 23 19:04:33 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:04:33.815 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 19:04:33 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:04:33.816 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58517681-eea7-4542-9aa9-1caa99f33856", "format": "json"}]: dispatch
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:58517681-eea7-4542-9aa9-1caa99f33856, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:58517681-eea7-4542-9aa9-1caa99f33856, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:34 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:04:34.562+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58517681-eea7-4542-9aa9-1caa99f33856' of type subvolume
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58517681-eea7-4542-9aa9-1caa99f33856' of type subvolume
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58517681-eea7-4542-9aa9-1caa99f33856", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58517681-eea7-4542-9aa9-1caa99f33856, vol_name:cephfs) < ""
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/58517681-eea7-4542-9aa9-1caa99f33856'' moved to trashcan
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58517681-eea7-4542-9aa9-1caa99f33856, vol_name:cephfs) < ""
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 108 KiB/s wr, 12 op/s
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:04:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:34 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:04:34.818 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 19:04:35 compute-0 podman[247829]: 2026-01-23 19:04:35.038183909 +0000 UTC m=+0.104258180 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 19:04:35 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58517681-eea7-4542-9aa9-1caa99f33856", "format": "json"}]: dispatch
Jan 23 19:04:35 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58517681-eea7-4542-9aa9-1caa99f33856", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:35 compute-0 ceph-mon[75097]: pgmap v1035: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 108 KiB/s wr, 12 op/s
Jan 23 19:04:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:04:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:04:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 23 19:04:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:04:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:04:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:35 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:04:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:04:35 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:04:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:04:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:36 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 23 19:04:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:04:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:04:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:04:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:04:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:04:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 5 op/s
Jan 23 19:04:37 compute-0 ceph-mon[75097]: pgmap v1036: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 5 op/s
Jan 23 19:04:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 5 op/s
Jan 23 19:04:38 compute-0 ceph-mon[75097]: pgmap v1037: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 5 op/s
Jan 23 19:04:38 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:04:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:38 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:04:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:04:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea32016d-d076-4cdb-87b1-5845df1c383d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea32016d-d076-4cdb-87b1-5845df1c383d, vol_name:cephfs) < ""
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ea32016d-d076-4cdb-87b1-5845df1c383d/3d3aa4a3-c446-44bf-8f3f-09134b386913'.
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea32016d-d076-4cdb-87b1-5845df1c383d/.meta.tmp'
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea32016d-d076-4cdb-87b1-5845df1c383d/.meta.tmp' to config b'/volumes/_nogroup/ea32016d-d076-4cdb-87b1-5845df1c383d/.meta'
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea32016d-d076-4cdb-87b1-5845df1c383d, vol_name:cephfs) < ""
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea32016d-d076-4cdb-87b1-5845df1c383d", "format": "json"}]: dispatch
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea32016d-d076-4cdb-87b1-5845df1c383d, vol_name:cephfs) < ""
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea32016d-d076-4cdb-87b1-5845df1c383d, vol_name:cephfs) < ""
Jan 23 19:04:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:04:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6fe75ed-09c7-41b1-a5a6-78194558df7e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6fe75ed-09c7-41b1-a5a6-78194558df7e, vol_name:cephfs) < ""
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f6fe75ed-09c7-41b1-a5a6-78194558df7e/fc40569c-59bd-4440-9d0b-4c4bcae98971'.
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f6fe75ed-09c7-41b1-a5a6-78194558df7e/.meta.tmp'
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6fe75ed-09c7-41b1-a5a6-78194558df7e/.meta.tmp' to config b'/volumes/_nogroup/f6fe75ed-09c7-41b1-a5a6-78194558df7e/.meta'
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6fe75ed-09c7-41b1-a5a6-78194558df7e, vol_name:cephfs) < ""
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6fe75ed-09c7-41b1-a5a6-78194558df7e", "format": "json"}]: dispatch
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6fe75ed-09c7-41b1-a5a6-78194558df7e, vol_name:cephfs) < ""
Jan 23 19:04:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6fe75ed-09c7-41b1-a5a6-78194558df7e, vol_name:cephfs) < ""
Jan 23 19:04:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:04:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:39 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:39 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea32016d-d076-4cdb-87b1-5845df1c383d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:39 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea32016d-d076-4cdb-87b1-5845df1c383d", "format": "json"}]: dispatch
Jan 23 19:04:39 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:39 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6fe75ed-09c7-41b1-a5a6-78194558df7e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:39 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6fe75ed-09c7-41b1-a5a6-78194558df7e", "format": "json"}]: dispatch
Jan 23 19:04:39 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658955772942907 of space, bias 1.0, pg target 0.19976867318828723 quantized to 32 (current 32)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00019440092347737455 of space, bias 4.0, pg target 0.23328110817284947 quantized to 16 (current 16)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:04:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 100 KiB/s wr, 11 op/s
Jan 23 19:04:40 compute-0 ceph-mon[75097]: pgmap v1038: 305 pgs: 305 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 100 KiB/s wr, 11 op/s
Jan 23 19:04:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 9 op/s
Jan 23 19:04:42 compute-0 ceph-mon[75097]: pgmap v1039: 305 pgs: 305 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 9 op/s
Jan 23 19:04:42 compute-0 podman[247856]: 2026-01-23 19:04:42.972284831 +0000 UTC m=+0.057016307 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6fe75ed-09c7-41b1-a5a6-78194558df7e", "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f6fe75ed-09c7-41b1-a5a6-78194558df7e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f6fe75ed-09c7-41b1-a5a6-78194558df7e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:04:43.113+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6fe75ed-09c7-41b1-a5a6-78194558df7e' of type subvolume
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6fe75ed-09c7-41b1-a5a6-78194558df7e' of type subvolume
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6fe75ed-09c7-41b1-a5a6-78194558df7e", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6fe75ed-09c7-41b1-a5a6-78194558df7e, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f6fe75ed-09c7-41b1-a5a6-78194558df7e'' moved to trashcan
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6fe75ed-09c7-41b1-a5a6-78194558df7e, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:04:43 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 23 19:04:43 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:04:43 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:04:43 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dca5fa61-ae49-4ea7-b61f-b63caccb203e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dca5fa61-ae49-4ea7-b61f-b63caccb203e, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/dca5fa61-ae49-4ea7-b61f-b63caccb203e/8273c511-737c-4532-9ce0-819958a98905'.
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dca5fa61-ae49-4ea7-b61f-b63caccb203e/.meta.tmp'
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca5fa61-ae49-4ea7-b61f-b63caccb203e/.meta.tmp' to config b'/volumes/_nogroup/dca5fa61-ae49-4ea7-b61f-b63caccb203e/.meta'
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dca5fa61-ae49-4ea7-b61f-b63caccb203e, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dca5fa61-ae49-4ea7-b61f-b63caccb203e", "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dca5fa61-ae49-4ea7-b61f-b63caccb203e, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dca5fa61-ae49-4ea7-b61f-b63caccb203e, vol_name:cephfs) < ""
Jan 23 19:04:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:04:43 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6fe75ed-09c7-41b1-a5a6-78194558df7e", "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6fe75ed-09c7-41b1-a5a6-78194558df7e", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:43 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:04:43 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:04:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dca5fa61-ae49-4ea7-b61f-b63caccb203e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:43 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 102 KiB/s wr, 12 op/s
Jan 23 19:04:44 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dca5fa61-ae49-4ea7-b61f-b63caccb203e", "format": "json"}]: dispatch
Jan 23 19:04:44 compute-0 ceph-mon[75097]: pgmap v1040: 305 pgs: 305 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 102 KiB/s wr, 12 op/s
Jan 23 19:04:44 compute-0 sudo[247876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:04:44 compute-0 sudo[247876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:44 compute-0 sudo[247876]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:45 compute-0 sudo[247901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:04:45 compute-0 sudo[247901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:45 compute-0 sudo[247901]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:45 compute-0 sudo[247956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:04:45 compute-0 sudo[247956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:45 compute-0 sudo[247956]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:45 compute-0 sudo[247981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 23 19:04:45 compute-0 sudo[247981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:45 compute-0 sudo[247981]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:04:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:04:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:04:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:04:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:04:46 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:04:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:04:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:04:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:04:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:04:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:04:46 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:04:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:04:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:04:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:04:46 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:04:46 compute-0 sudo[248024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:04:46 compute-0 sudo[248024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:46 compute-0 sudo[248024]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:46 compute-0 sudo[248049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:04:46 compute-0 sudo[248049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:46 compute-0 podman[248086]: 2026-01-23 19:04:46.460814665 +0000 UTC m=+0.048095476 container create 91e058729ccdfafedc65786d350f071bf516ffe8dc7eee79596c394056810762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:04:46 compute-0 systemd[1]: Started libpod-conmon-91e058729ccdfafedc65786d350f071bf516ffe8dc7eee79596c394056810762.scope.
Jan 23 19:04:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:04:46 compute-0 podman[248086]: 2026-01-23 19:04:46.434481081 +0000 UTC m=+0.021761912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:04:46 compute-0 podman[248086]: 2026-01-23 19:04:46.557006052 +0000 UTC m=+0.144286893 container init 91e058729ccdfafedc65786d350f071bf516ffe8dc7eee79596c394056810762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:04:46 compute-0 podman[248086]: 2026-01-23 19:04:46.566837136 +0000 UTC m=+0.154117947 container start 91e058729ccdfafedc65786d350f071bf516ffe8dc7eee79596c394056810762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:04:46 compute-0 bold_goldstine[248102]: 167 167
Jan 23 19:04:46 compute-0 systemd[1]: libpod-91e058729ccdfafedc65786d350f071bf516ffe8dc7eee79596c394056810762.scope: Deactivated successfully.
Jan 23 19:04:46 compute-0 podman[248086]: 2026-01-23 19:04:46.574155458 +0000 UTC m=+0.161436309 container attach 91e058729ccdfafedc65786d350f071bf516ffe8dc7eee79596c394056810762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:04:46 compute-0 podman[248086]: 2026-01-23 19:04:46.574944377 +0000 UTC m=+0.162225198 container died 91e058729ccdfafedc65786d350f071bf516ffe8dc7eee79596c394056810762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:04:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 73 KiB/s wr, 9 op/s
Jan 23 19:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a01d786da2ec41b502ca939725e90418165a887d255b340bd42b3d1e293fe08-merged.mount: Deactivated successfully.
Jan 23 19:04:46 compute-0 podman[248086]: 2026-01-23 19:04:46.674103689 +0000 UTC m=+0.261384500 container remove 91e058729ccdfafedc65786d350f071bf516ffe8dc7eee79596c394056810762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:04:46 compute-0 systemd[1]: libpod-conmon-91e058729ccdfafedc65786d350f071bf516ffe8dc7eee79596c394056810762.scope: Deactivated successfully.
Jan 23 19:04:46 compute-0 podman[248125]: 2026-01-23 19:04:46.849587955 +0000 UTC m=+0.060066602 container create b106d8b412e29dcbd819582b82e2bb281580cfed02ff66b4ee3f241d8aa2a798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:04:46 compute-0 systemd[1]: Started libpod-conmon-b106d8b412e29dcbd819582b82e2bb281580cfed02ff66b4ee3f241d8aa2a798.scope.
Jan 23 19:04:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:04:46 compute-0 podman[248125]: 2026-01-23 19:04:46.812675219 +0000 UTC m=+0.023153886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33efc9e2ca711c56703c923a707056c73e6145ee6287f94419d32bfeca62cdee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33efc9e2ca711c56703c923a707056c73e6145ee6287f94419d32bfeca62cdee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33efc9e2ca711c56703c923a707056c73e6145ee6287f94419d32bfeca62cdee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33efc9e2ca711c56703c923a707056c73e6145ee6287f94419d32bfeca62cdee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33efc9e2ca711c56703c923a707056c73e6145ee6287f94419d32bfeca62cdee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:46 compute-0 podman[248125]: 2026-01-23 19:04:46.920216449 +0000 UTC m=+0.130695116 container init b106d8b412e29dcbd819582b82e2bb281580cfed02ff66b4ee3f241d8aa2a798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:04:46 compute-0 podman[248125]: 2026-01-23 19:04:46.927662944 +0000 UTC m=+0.138141591 container start b106d8b412e29dcbd819582b82e2bb281580cfed02ff66b4ee3f241d8aa2a798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:04:46 compute-0 podman[248125]: 2026-01-23 19:04:46.931359926 +0000 UTC m=+0.141838603 container attach b106d8b412e29dcbd819582b82e2bb281580cfed02ff66b4ee3f241d8aa2a798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 19:04:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:04:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:04:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:04:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:04:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:04:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:04:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:04:47 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:04:47 compute-0 ceph-mon[75097]: pgmap v1041: 305 pgs: 305 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 73 KiB/s wr, 9 op/s
Jan 23 19:04:47 compute-0 goofy_haslett[248142]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:04:47 compute-0 goofy_haslett[248142]: --> All data devices are unavailable
Jan 23 19:04:47 compute-0 systemd[1]: libpod-b106d8b412e29dcbd819582b82e2bb281580cfed02ff66b4ee3f241d8aa2a798.scope: Deactivated successfully.
Jan 23 19:04:47 compute-0 podman[248125]: 2026-01-23 19:04:47.415027892 +0000 UTC m=+0.625506540 container died b106d8b412e29dcbd819582b82e2bb281580cfed02ff66b4ee3f241d8aa2a798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 19:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-33efc9e2ca711c56703c923a707056c73e6145ee6287f94419d32bfeca62cdee-merged.mount: Deactivated successfully.
Jan 23 19:04:47 compute-0 podman[248125]: 2026-01-23 19:04:47.464097671 +0000 UTC m=+0.674576318 container remove b106d8b412e29dcbd819582b82e2bb281580cfed02ff66b4ee3f241d8aa2a798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 23 19:04:47 compute-0 systemd[1]: libpod-conmon-b106d8b412e29dcbd819582b82e2bb281580cfed02ff66b4ee3f241d8aa2a798.scope: Deactivated successfully.
Jan 23 19:04:47 compute-0 sudo[248049]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:47 compute-0 sudo[248173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:04:47 compute-0 sudo[248173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:47 compute-0 sudo[248173]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:47 compute-0 sudo[248198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:04:47 compute-0 sudo[248198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:47 compute-0 podman[248236]: 2026-01-23 19:04:47.907811066 +0000 UTC m=+0.046780502 container create 9aabd5cdcd0e614ccde6eb953e42cbe322d4f399ed703c806bce08a35c95d5d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 23 19:04:47 compute-0 systemd[1]: Started libpod-conmon-9aabd5cdcd0e614ccde6eb953e42cbe322d4f399ed703c806bce08a35c95d5d7.scope.
Jan 23 19:04:47 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:04:47 compute-0 podman[248236]: 2026-01-23 19:04:47.885163814 +0000 UTC m=+0.024133280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:04:47 compute-0 podman[248236]: 2026-01-23 19:04:47.987304119 +0000 UTC m=+0.126273625 container init 9aabd5cdcd0e614ccde6eb953e42cbe322d4f399ed703c806bce08a35c95d5d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chaum, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:04:47 compute-0 podman[248236]: 2026-01-23 19:04:47.995060182 +0000 UTC m=+0.134029618 container start 9aabd5cdcd0e614ccde6eb953e42cbe322d4f399ed703c806bce08a35c95d5d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chaum, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:04:47 compute-0 musing_chaum[248252]: 167 167
Jan 23 19:04:47 compute-0 podman[248236]: 2026-01-23 19:04:47.999255116 +0000 UTC m=+0.138224552 container attach 9aabd5cdcd0e614ccde6eb953e42cbe322d4f399ed703c806bce08a35c95d5d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chaum, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:04:47 compute-0 systemd[1]: libpod-9aabd5cdcd0e614ccde6eb953e42cbe322d4f399ed703c806bce08a35c95d5d7.scope: Deactivated successfully.
Jan 23 19:04:47 compute-0 podman[248236]: 2026-01-23 19:04:47.999896182 +0000 UTC m=+0.138865618 container died 9aabd5cdcd0e614ccde6eb953e42cbe322d4f399ed703c806bce08a35c95d5d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chaum, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dca5fa61-ae49-4ea7-b61f-b63caccb203e", "format": "json"}]: dispatch
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dca5fa61-ae49-4ea7-b61f-b63caccb203e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dca5fa61-ae49-4ea7-b61f-b63caccb203e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:48 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:04:48.005+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dca5fa61-ae49-4ea7-b61f-b63caccb203e' of type subvolume
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dca5fa61-ae49-4ea7-b61f-b63caccb203e' of type subvolume
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dca5fa61-ae49-4ea7-b61f-b63caccb203e", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dca5fa61-ae49-4ea7-b61f-b63caccb203e, vol_name:cephfs) < ""
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dca5fa61-ae49-4ea7-b61f-b63caccb203e'' moved to trashcan
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dca5fa61-ae49-4ea7-b61f-b63caccb203e, vol_name:cephfs) < ""
Jan 23 19:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3bd0516efaab4b82fe46dcca69adde7c5b9dfac89dc7aafd63cc5efc0bae4c2-merged.mount: Deactivated successfully.
Jan 23 19:04:48 compute-0 podman[248236]: 2026-01-23 19:04:48.053171865 +0000 UTC m=+0.192141301 container remove 9aabd5cdcd0e614ccde6eb953e42cbe322d4f399ed703c806bce08a35c95d5d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chaum, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 23 19:04:48 compute-0 systemd[1]: libpod-conmon-9aabd5cdcd0e614ccde6eb953e42cbe322d4f399ed703c806bce08a35c95d5d7.scope: Deactivated successfully.
Jan 23 19:04:48 compute-0 podman[248276]: 2026-01-23 19:04:48.20887404 +0000 UTC m=+0.043182464 container create 88074ff8ade3606564dab6d79d709046f1d7882f3faa5e653826ecd488680eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:04:48 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:48 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:04:48 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 19:04:48 compute-0 systemd[1]: Started libpod-conmon-88074ff8ade3606564dab6d79d709046f1d7882f3faa5e653826ecd488680eac.scope.
Jan 23 19:04:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a05604a54c3d59e5089461b9845526721ca210fc0f0596777636f0d5dd2c20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a05604a54c3d59e5089461b9845526721ca210fc0f0596777636f0d5dd2c20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a05604a54c3d59e5089461b9845526721ca210fc0f0596777636f0d5dd2c20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a05604a54c3d59e5089461b9845526721ca210fc0f0596777636f0d5dd2c20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:48 compute-0 podman[248276]: 2026-01-23 19:04:48.285324608 +0000 UTC m=+0.119633072 container init 88074ff8ade3606564dab6d79d709046f1d7882f3faa5e653826ecd488680eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 19:04:48 compute-0 podman[248276]: 2026-01-23 19:04:48.190860722 +0000 UTC m=+0.025169176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:04:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:04:48 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:48 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:48 compute-0 podman[248276]: 2026-01-23 19:04:48.292048865 +0000 UTC m=+0.126357289 container start 88074ff8ade3606564dab6d79d709046f1d7882f3faa5e653826ecd488680eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:04:48 compute-0 podman[248276]: 2026-01-23 19:04:48.296892035 +0000 UTC m=+0.131200469 container attach 88074ff8ade3606564dab6d79d709046f1d7882f3faa5e653826ecd488680eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 23 19:04:48 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]: {
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:     "0": [
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:         {
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "devices": [
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "/dev/loop3"
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             ],
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_name": "ceph_lv0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_size": "21470642176",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "name": "ceph_lv0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "tags": {
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.cluster_name": "ceph",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.crush_device_class": "",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.encrypted": "0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.objectstore": "bluestore",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.osd_id": "0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.type": "block",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.vdo": "0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.with_tpm": "0"
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             },
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "type": "block",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "vg_name": "ceph_vg0"
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:         }
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:     ],
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:     "1": [
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:         {
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "devices": [
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "/dev/loop4"
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             ],
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_name": "ceph_lv1",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_size": "21470642176",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "name": "ceph_lv1",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "tags": {
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.cluster_name": "ceph",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.crush_device_class": "",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.encrypted": "0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.objectstore": "bluestore",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.osd_id": "1",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.type": "block",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.vdo": "0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.with_tpm": "0"
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             },
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "type": "block",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "vg_name": "ceph_vg1"
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:         }
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:     ],
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:     "2": [
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:         {
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "devices": [
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "/dev/loop5"
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             ],
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_name": "ceph_lv2",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_size": "21470642176",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "name": "ceph_lv2",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "tags": {
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.cluster_name": "ceph",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.crush_device_class": "",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.encrypted": "0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.objectstore": "bluestore",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.osd_id": "2",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.type": "block",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.vdo": "0",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:                 "ceph.with_tpm": "0"
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             },
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "type": "block",
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:             "vg_name": "ceph_vg2"
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:         }
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]:     ]
Jan 23 19:04:48 compute-0 beautiful_hugle[248292]: }
Jan 23 19:04:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 73 KiB/s wr, 9 op/s
Jan 23 19:04:48 compute-0 systemd[1]: libpod-88074ff8ade3606564dab6d79d709046f1d7882f3faa5e653826ecd488680eac.scope: Deactivated successfully.
Jan 23 19:04:48 compute-0 podman[248276]: 2026-01-23 19:04:48.603359223 +0000 UTC m=+0.437667667 container died 88074ff8ade3606564dab6d79d709046f1d7882f3faa5e653826ecd488680eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 23 19:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-32a05604a54c3d59e5089461b9845526721ca210fc0f0596777636f0d5dd2c20-merged.mount: Deactivated successfully.
Jan 23 19:04:49 compute-0 podman[248276]: 2026-01-23 19:04:49.131262198 +0000 UTC m=+0.965570632 container remove 88074ff8ade3606564dab6d79d709046f1d7882f3faa5e653826ecd488680eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 19:04:49 compute-0 systemd[1]: libpod-conmon-88074ff8ade3606564dab6d79d709046f1d7882f3faa5e653826ecd488680eac.scope: Deactivated successfully.
Jan 23 19:04:49 compute-0 sudo[248198]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:49 compute-0 sudo[248313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:04:49 compute-0 sudo[248313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:49 compute-0 sudo[248313]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:49 compute-0 sudo[248338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:04:49 compute-0 sudo[248338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:49 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dca5fa61-ae49-4ea7-b61f-b63caccb203e", "format": "json"}]: dispatch
Jan 23 19:04:49 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dca5fa61-ae49-4ea7-b61f-b63caccb203e", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:49 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:04:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:49 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:49 compute-0 ceph-mon[75097]: pgmap v1042: 305 pgs: 305 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 73 KiB/s wr, 9 op/s
Jan 23 19:04:49 compute-0 podman[248376]: 2026-01-23 19:04:49.585812473 +0000 UTC m=+0.054287319 container create 8c85bcaf2246a2faa0136361484ce62df36b89fa07b8376ff12bfcc765aca3e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_nightingale, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 19:04:49 compute-0 podman[248376]: 2026-01-23 19:04:49.554901166 +0000 UTC m=+0.023376042 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:04:49 compute-0 systemd[1]: Started libpod-conmon-8c85bcaf2246a2faa0136361484ce62df36b89fa07b8376ff12bfcc765aca3e8.scope.
Jan 23 19:04:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:04:49 compute-0 podman[248376]: 2026-01-23 19:04:49.921236479 +0000 UTC m=+0.389711355 container init 8c85bcaf2246a2faa0136361484ce62df36b89fa07b8376ff12bfcc765aca3e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_nightingale, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:04:49 compute-0 podman[248376]: 2026-01-23 19:04:49.931156025 +0000 UTC m=+0.399630871 container start 8c85bcaf2246a2faa0136361484ce62df36b89fa07b8376ff12bfcc765aca3e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_nightingale, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 23 19:04:49 compute-0 magical_nightingale[248392]: 167 167
Jan 23 19:04:49 compute-0 systemd[1]: libpod-8c85bcaf2246a2faa0136361484ce62df36b89fa07b8376ff12bfcc765aca3e8.scope: Deactivated successfully.
Jan 23 19:04:50 compute-0 podman[248376]: 2026-01-23 19:04:50.016433832 +0000 UTC m=+0.484908708 container attach 8c85bcaf2246a2faa0136361484ce62df36b89fa07b8376ff12bfcc765aca3e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Jan 23 19:04:50 compute-0 podman[248376]: 2026-01-23 19:04:50.017170281 +0000 UTC m=+0.485645127 container died 8c85bcaf2246a2faa0136361484ce62df36b89fa07b8376ff12bfcc765aca3e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 19:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9b135300c0354a93f011b28a5ba70d37f17154f23c9a386aef238c8ff5b5937-merged.mount: Deactivated successfully.
Jan 23 19:04:50 compute-0 podman[248376]: 2026-01-23 19:04:50.241017848 +0000 UTC m=+0.709492694 container remove 8c85bcaf2246a2faa0136361484ce62df36b89fa07b8376ff12bfcc765aca3e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 19:04:50 compute-0 systemd[1]: libpod-conmon-8c85bcaf2246a2faa0136361484ce62df36b89fa07b8376ff12bfcc765aca3e8.scope: Deactivated successfully.
Jan 23 19:04:50 compute-0 podman[248417]: 2026-01-23 19:04:50.466941187 +0000 UTC m=+0.104394574 container create 14e64dfa8bd032aff102f9038c7a51a4483d79bd4cbd41de246788f181d983ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 23 19:04:50 compute-0 podman[248417]: 2026-01-23 19:04:50.387612577 +0000 UTC m=+0.025065994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:04:50 compute-0 systemd[1]: Started libpod-conmon-14e64dfa8bd032aff102f9038c7a51a4483d79bd4cbd41de246788f181d983ce.scope.
Jan 23 19:04:50 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ba5b1ece22f8fc60392ce52b4b05ca033d2ae47628bd2fdaebb638a72af5ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ba5b1ece22f8fc60392ce52b4b05ca033d2ae47628bd2fdaebb638a72af5ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ba5b1ece22f8fc60392ce52b4b05ca033d2ae47628bd2fdaebb638a72af5ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ba5b1ece22f8fc60392ce52b4b05ca033d2ae47628bd2fdaebb638a72af5ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:04:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 54 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 105 KiB/s wr, 12 op/s
Jan 23 19:04:50 compute-0 podman[248417]: 2026-01-23 19:04:50.62941898 +0000 UTC m=+0.266872377 container init 14e64dfa8bd032aff102f9038c7a51a4483d79bd4cbd41de246788f181d983ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:04:50 compute-0 podman[248417]: 2026-01-23 19:04:50.63585425 +0000 UTC m=+0.273307637 container start 14e64dfa8bd032aff102f9038c7a51a4483d79bd4cbd41de246788f181d983ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:04:50 compute-0 podman[248417]: 2026-01-23 19:04:50.689140183 +0000 UTC m=+0.326593570 container attach 14e64dfa8bd032aff102f9038c7a51a4483d79bd4cbd41de246788f181d983ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:04:50 compute-0 ceph-mon[75097]: pgmap v1043: 305 pgs: 305 active+clean; 54 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 105 KiB/s wr, 12 op/s
Jan 23 19:04:51 compute-0 lvm[248511]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:04:51 compute-0 lvm[248513]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:04:51 compute-0 lvm[248513]: VG ceph_vg1 finished
Jan 23 19:04:51 compute-0 lvm[248511]: VG ceph_vg0 finished
Jan 23 19:04:51 compute-0 lvm[248515]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:04:51 compute-0 lvm[248515]: VG ceph_vg2 finished
Jan 23 19:04:51 compute-0 ecstatic_cerf[248433]: {}
Jan 23 19:04:51 compute-0 systemd[1]: libpod-14e64dfa8bd032aff102f9038c7a51a4483d79bd4cbd41de246788f181d983ce.scope: Deactivated successfully.
Jan 23 19:04:51 compute-0 systemd[1]: libpod-14e64dfa8bd032aff102f9038c7a51a4483d79bd4cbd41de246788f181d983ce.scope: Consumed 1.330s CPU time.
Jan 23 19:04:51 compute-0 podman[248417]: 2026-01-23 19:04:51.442460194 +0000 UTC m=+1.079913601 container died 14e64dfa8bd032aff102f9038c7a51a4483d79bd4cbd41de246788f181d983ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 23 19:04:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-74ba5b1ece22f8fc60392ce52b4b05ca033d2ae47628bd2fdaebb638a72af5ab-merged.mount: Deactivated successfully.
Jan 23 19:04:51 compute-0 podman[248417]: 2026-01-23 19:04:51.799721423 +0000 UTC m=+1.437174850 container remove 14e64dfa8bd032aff102f9038c7a51a4483d79bd4cbd41de246788f181d983ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:04:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea32016d-d076-4cdb-87b1-5845df1c383d", "format": "json"}]: dispatch
Jan 23 19:04:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea32016d-d076-4cdb-87b1-5845df1c383d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea32016d-d076-4cdb-87b1-5845df1c383d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:51 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:04:51.820+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea32016d-d076-4cdb-87b1-5845df1c383d' of type subvolume
Jan 23 19:04:51 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea32016d-d076-4cdb-87b1-5845df1c383d' of type subvolume
Jan 23 19:04:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea32016d-d076-4cdb-87b1-5845df1c383d", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea32016d-d076-4cdb-87b1-5845df1c383d, vol_name:cephfs) < ""
Jan 23 19:04:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea32016d-d076-4cdb-87b1-5845df1c383d'' moved to trashcan
Jan 23 19:04:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:04:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea32016d-d076-4cdb-87b1-5845df1c383d, vol_name:cephfs) < ""
Jan 23 19:04:51 compute-0 sudo[248338]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:04:51 compute-0 systemd[1]: libpod-conmon-14e64dfa8bd032aff102f9038c7a51a4483d79bd4cbd41de246788f181d983ce.scope: Deactivated successfully.
Jan 23 19:04:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:04:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:04:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:04:52 compute-0 sudo[248529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:04:52 compute-0 sudo[248529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:04:52 compute-0 sudo[248529]: pam_unix(sudo:session): session closed for user root
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:04:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 23 19:04:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:04:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:04:52 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7581fe4e-6784-4ac5-b13d-2b32f94d378c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7581fe4e-6784-4ac5-b13d-2b32f94d378c, vol_name:cephfs) < ""
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 54 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 6 op/s
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7581fe4e-6784-4ac5-b13d-2b32f94d378c/9d568488-ab47-4810-961b-d742f695cd05'.
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7581fe4e-6784-4ac5-b13d-2b32f94d378c/.meta.tmp'
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7581fe4e-6784-4ac5-b13d-2b32f94d378c/.meta.tmp' to config b'/volumes/_nogroup/7581fe4e-6784-4ac5-b13d-2b32f94d378c/.meta'
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7581fe4e-6784-4ac5-b13d-2b32f94d378c, vol_name:cephfs) < ""
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7581fe4e-6784-4ac5-b13d-2b32f94d378c", "format": "json"}]: dispatch
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7581fe4e-6784-4ac5-b13d-2b32f94d378c, vol_name:cephfs) < ""
Jan 23 19:04:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7581fe4e-6784-4ac5-b13d-2b32f94d378c, vol_name:cephfs) < ""
Jan 23 19:04:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:04:52 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea32016d-d076-4cdb-87b1-5845df1c383d", "format": "json"}]: dispatch
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea32016d-d076-4cdb-87b1-5845df1c383d", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7581fe4e-6784-4ac5-b13d-2b32f94d378c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:53 compute-0 ceph-mon[75097]: pgmap v1044: 305 pgs: 305 active+clean; 54 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 6 op/s
Jan 23 19:04:53 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:04:54 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7581fe4e-6784-4ac5-b13d-2b32f94d378c", "format": "json"}]: dispatch
Jan 23 19:04:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 84 KiB/s wr, 10 op/s
Jan 23 19:04:55 compute-0 ceph-mon[75097]: pgmap v1045: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 84 KiB/s wr, 10 op/s
Jan 23 19:04:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:04:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:04:56 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:04:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:04:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:56 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:04:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:04:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "20bc8457-64c2-49cd-863d-7d9377e1a1fe", "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:20bc8457-64c2-49cd-863d-7d9377e1a1fe, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 23 19:04:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:20bc8457-64c2-49cd-863d-7d9377e1a1fe, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 23 19:04:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 7 op/s
Jan 23 19:04:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:04:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:04:56 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:04:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:04:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3873790988' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:04:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:04:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3873790988' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:04:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:04:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "20bc8457-64c2-49cd-863d-7d9377e1a1fe", "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:04:57 compute-0 ceph-mon[75097]: pgmap v1046: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 7 op/s
Jan 23 19:04:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3873790988' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:04:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3873790988' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:04:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7581fe4e-6784-4ac5-b13d-2b32f94d378c", "format": "json"}]: dispatch
Jan 23 19:04:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7581fe4e-6784-4ac5-b13d-2b32f94d378c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7581fe4e-6784-4ac5-b13d-2b32f94d378c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:04:57 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:04:57.848+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7581fe4e-6784-4ac5-b13d-2b32f94d378c' of type subvolume
Jan 23 19:04:57 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7581fe4e-6784-4ac5-b13d-2b32f94d378c' of type subvolume
Jan 23 19:04:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7581fe4e-6784-4ac5-b13d-2b32f94d378c", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7581fe4e-6784-4ac5-b13d-2b32f94d378c, vol_name:cephfs) < ""
Jan 23 19:04:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7581fe4e-6784-4ac5-b13d-2b32f94d378c'' moved to trashcan
Jan 23 19:04:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:04:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7581fe4e-6784-4ac5-b13d-2b32f94d378c, vol_name:cephfs) < ""
Jan 23 19:04:58 compute-0 nova_compute[240119]: 2026-01-23 19:04:58.362 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 7 op/s
Jan 23 19:04:59 compute-0 nova_compute[240119]: 2026-01-23 19:04:59.347 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:04:59 compute-0 nova_compute[240119]: 2026-01-23 19:04:59.376 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:04:59 compute-0 nova_compute[240119]: 2026-01-23 19:04:59.377 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:04:59 compute-0 nova_compute[240119]: 2026-01-23 19:04:59.377 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:04:59 compute-0 nova_compute[240119]: 2026-01-23 19:04:59.377 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:04:59 compute-0 nova_compute[240119]: 2026-01-23 19:04:59.378 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:04:59 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7581fe4e-6784-4ac5-b13d-2b32f94d378c", "format": "json"}]: dispatch
Jan 23 19:04:59 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7581fe4e-6784-4ac5-b13d-2b32f94d378c", "force": true, "format": "json"}]: dispatch
Jan 23 19:04:59 compute-0 ceph-mon[75097]: pgmap v1047: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 7 op/s
Jan 23 19:04:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:04:59 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/189883597' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:04:59 compute-0 nova_compute[240119]: 2026-01-23 19:04:59.956 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.106 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.107 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5080MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.107 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.108 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:05:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 23 19:05:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:05:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:05:00 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.401 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.402 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.474 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing inventories for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.551 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Updating ProviderTree inventory for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.552 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Updating inventory in ProviderTree for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.566 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing aggregate associations for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.585 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing trait associations for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 19:05:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 KiB/s wr, 10 op/s
Jan 23 19:05:00 compute-0 nova_compute[240119]: 2026-01-23 19:05:00.601 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:05:00 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/189883597' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:05:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:05:00 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:05:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:05:01 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/747790149' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:05:01 compute-0 nova_compute[240119]: 2026-01-23 19:05:01.209 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:05:01 compute-0 nova_compute[240119]: 2026-01-23 19:05:01.214 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:05:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:01 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:01 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:01 compute-0 ceph-mon[75097]: pgmap v1048: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 KiB/s wr, 10 op/s
Jan 23 19:05:01 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/747790149' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:05:02 compute-0 nova_compute[240119]: 2026-01-23 19:05:02.161 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:05:02 compute-0 nova_compute[240119]: 2026-01-23 19:05:02.163 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:05:02 compute-0 nova_compute[240119]: 2026-01-23 19:05:02.163 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:05:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "20bc8457-64c2-49cd-863d-7d9377e1a1fe", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:20bc8457-64c2-49cd-863d-7d9377e1a1fe, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 23 19:05:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:20bc8457-64c2-49cd-863d-7d9377e1a1fe, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 23 19:05:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 54 KiB/s wr, 7 op/s
Jan 23 19:05:02 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "20bc8457-64c2-49cd-863d-7d9377e1a1fe", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:02 compute-0 ceph-mon[75097]: pgmap v1049: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 54 KiB/s wr, 7 op/s
Jan 23 19:05:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "6b31e860-10ee-43b8-b69b-3931b32aba37", "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:6b31e860-10ee-43b8-b69b-3931b32aba37, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 23 19:05:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:6b31e860-10ee-43b8-b69b-3931b32aba37, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 23 19:05:03 compute-0 nova_compute[240119]: 2026-01-23 19:05:03.164 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:05:03 compute-0 nova_compute[240119]: 2026-01-23 19:05:03.165 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:05:03 compute-0 nova_compute[240119]: 2026-01-23 19:05:03.166 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:05:03 compute-0 nova_compute[240119]: 2026-01-23 19:05:03.166 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:05:03 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:05:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:05:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:03 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:05:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:05:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:03 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:03 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "6b31e860-10ee-43b8-b69b-3931b32aba37", "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 94 KiB/s wr, 12 op/s
Jan 23 19:05:05 compute-0 nova_compute[240119]: 2026-01-23 19:05:05.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:05:05 compute-0 nova_compute[240119]: 2026-01-23 19:05:05.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:05:05 compute-0 nova_compute[240119]: 2026-01-23 19:05:05.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:05:05 compute-0 nova_compute[240119]: 2026-01-23 19:05:05.383 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:05:05 compute-0 nova_compute[240119]: 2026-01-23 19:05:05.383 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:05:05 compute-0 nova_compute[240119]: 2026-01-23 19:05:05.383 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:05:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:05:06 compute-0 ceph-mon[75097]: pgmap v1050: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 94 KiB/s wr, 12 op/s
Jan 23 19:05:06 compute-0 podman[248603]: 2026-01-23 19:05:06.040287494 +0000 UTC m=+0.118523074 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 23 19:05:06 compute-0 nova_compute[240119]: 2026-01-23 19:05:06.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:05:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "6b31e860-10ee-43b8-b69b-3931b32aba37", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:6b31e860-10ee-43b8-b69b-3931b32aba37, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 23 19:05:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:6b31e860-10ee-43b8-b69b-3931b32aba37, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 23 19:05:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 67 KiB/s wr, 8 op/s
Jan 23 19:05:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7c8fd20a-b01b-4f68-87ff-9188d64cf184", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7c8fd20a-b01b-4f68-87ff-9188d64cf184, vol_name:cephfs) < ""
Jan 23 19:05:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "6b31e860-10ee-43b8-b69b-3931b32aba37", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:07 compute-0 ceph-mon[75097]: pgmap v1051: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 67 KiB/s wr, 8 op/s
Jan 23 19:05:07 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7c8fd20a-b01b-4f68-87ff-9188d64cf184/5ec7af24-272a-498e-b331-4b1a4ef130ee'.
Jan 23 19:05:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7c8fd20a-b01b-4f68-87ff-9188d64cf184/.meta.tmp'
Jan 23 19:05:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7c8fd20a-b01b-4f68-87ff-9188d64cf184/.meta.tmp' to config b'/volumes/_nogroup/7c8fd20a-b01b-4f68-87ff-9188d64cf184/.meta'
Jan 23 19:05:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7c8fd20a-b01b-4f68-87ff-9188d64cf184, vol_name:cephfs) < ""
Jan 23 19:05:07 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7c8fd20a-b01b-4f68-87ff-9188d64cf184", "format": "json"}]: dispatch
Jan 23 19:05:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7c8fd20a-b01b-4f68-87ff-9188d64cf184, vol_name:cephfs) < ""
Jan 23 19:05:07 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7c8fd20a-b01b-4f68-87ff-9188d64cf184, vol_name:cephfs) < ""
Jan 23 19:05:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:05:07 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:05:08 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 23 19:05:08 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:05:08 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:05:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:05:08 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:05:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:05:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7c8fd20a-b01b-4f68-87ff-9188d64cf184", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:08 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:08 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:08 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:05:08 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:05:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 67 KiB/s wr, 8 op/s
Jan 23 19:05:09 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7c8fd20a-b01b-4f68-87ff-9188d64cf184", "format": "json"}]: dispatch
Jan 23 19:05:09 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:09 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:09 compute-0 ceph-mon[75097]: pgmap v1052: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 67 KiB/s wr, 8 op/s
Jan 23 19:05:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 55 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 87 KiB/s wr, 11 op/s
Jan 23 19:05:10 compute-0 ceph-mon[75097]: pgmap v1053: 305 pgs: 305 active+clean; 55 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 87 KiB/s wr, 11 op/s
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:05:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:05:11 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice_bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:05:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:05:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:11 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7c8fd20a-b01b-4f68-87ff-9188d64cf184", "format": "json"}]: dispatch
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7c8fd20a-b01b-4f68-87ff-9188d64cf184, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7c8fd20a-b01b-4f68-87ff-9188d64cf184, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:11 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:05:11.509+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7c8fd20a-b01b-4f68-87ff-9188d64cf184' of type subvolume
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7c8fd20a-b01b-4f68-87ff-9188d64cf184' of type subvolume
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7c8fd20a-b01b-4f68-87ff-9188d64cf184", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7c8fd20a-b01b-4f68-87ff-9188d64cf184, vol_name:cephfs) < ""
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7c8fd20a-b01b-4f68-87ff-9188d64cf184'' moved to trashcan
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:05:11 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7c8fd20a-b01b-4f68-87ff-9188d64cf184, vol_name:cephfs) < ""
Jan 23 19:05:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:11 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:05:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:05:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:11 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7c8fd20a-b01b-4f68-87ff-9188d64cf184", "format": "json"}]: dispatch
Jan 23 19:05:11 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7c8fd20a-b01b-4f68-87ff-9188d64cf184", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 55 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 61 KiB/s wr, 8 op/s
Jan 23 19:05:12 compute-0 ceph-mon[75097]: pgmap v1054: 305 pgs: 305 active+clean; 55 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 61 KiB/s wr, 8 op/s
Jan 23 19:05:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:05:13.585 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:05:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:05:13.585 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:05:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:05:13.586 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:05:13 compute-0 podman[248629]: 2026-01-23 19:05:13.974914632 +0000 UTC m=+0.055935649 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c88504fb-ae34-4350-82e6-9d10001a8eb2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c88504fb-ae34-4350-82e6-9d10001a8eb2, vol_name:cephfs) < ""
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c88504fb-ae34-4350-82e6-9d10001a8eb2/b70a76b5-edec-4772-b13c-7b2ca6c4f906'.
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c88504fb-ae34-4350-82e6-9d10001a8eb2/.meta.tmp'
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c88504fb-ae34-4350-82e6-9d10001a8eb2/.meta.tmp' to config b'/volumes/_nogroup/c88504fb-ae34-4350-82e6-9d10001a8eb2/.meta'
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c88504fb-ae34-4350-82e6-9d10001a8eb2, vol_name:cephfs) < ""
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c88504fb-ae34-4350-82e6-9d10001a8eb2", "format": "json"}]: dispatch
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c88504fb-ae34-4350-82e6-9d10001a8eb2, vol_name:cephfs) < ""
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c88504fb-ae34-4350-82e6-9d10001a8eb2, vol_name:cephfs) < ""
Jan 23 19:05:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:05:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:14 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:05:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:05:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 23 19:05:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:05:14 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 11 op/s
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:05:14 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:05:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:15 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c88504fb-ae34-4350-82e6-9d10001a8eb2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:15 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c88504fb-ae34-4350-82e6-9d10001a8eb2", "format": "json"}]: dispatch
Jan 23 19:05:15 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:05:15 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:05:15 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:05:15 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:05:15 compute-0 ceph-mon[75097]: pgmap v1055: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 11 op/s
Jan 23 19:05:15 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:05:15 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "89e841f0-56dd-4e4e-8222-ae52cf7d9c48", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:89e841f0-56dd-4e4e-8222-ae52cf7d9c48, vol_name:cephfs) < ""
Jan 23 19:05:15 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/89e841f0-56dd-4e4e-8222-ae52cf7d9c48/d531052d-a685-4ac9-9d4c-5e839929ec39'.
Jan 23 19:05:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/89e841f0-56dd-4e4e-8222-ae52cf7d9c48/.meta.tmp'
Jan 23 19:05:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/89e841f0-56dd-4e4e-8222-ae52cf7d9c48/.meta.tmp' to config b'/volumes/_nogroup/89e841f0-56dd-4e4e-8222-ae52cf7d9c48/.meta'
Jan 23 19:05:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:89e841f0-56dd-4e4e-8222-ae52cf7d9c48, vol_name:cephfs) < ""
Jan 23 19:05:15 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "89e841f0-56dd-4e4e-8222-ae52cf7d9c48", "format": "json"}]: dispatch
Jan 23 19:05:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:89e841f0-56dd-4e4e-8222-ae52cf7d9c48, vol_name:cephfs) < ""
Jan 23 19:05:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:89e841f0-56dd-4e4e-8222-ae52cf7d9c48, vol_name:cephfs) < ""
Jan 23 19:05:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:05:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 44 KiB/s wr, 7 op/s
Jan 23 19:05:16 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:17 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "89e841f0-56dd-4e4e-8222-ae52cf7d9c48", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:17 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "89e841f0-56dd-4e4e-8222-ae52cf7d9c48", "format": "json"}]: dispatch
Jan 23 19:05:17 compute-0 ceph-mon[75097]: pgmap v1056: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 44 KiB/s wr, 7 op/s
Jan 23 19:05:18 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:05:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:05:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:05:18 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice_bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:05:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:05:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:18 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 44 KiB/s wr, 7 op/s
Jan 23 19:05:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:05:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "89e841f0-56dd-4e4e-8222-ae52cf7d9c48", "format": "json"}]: dispatch
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:89e841f0-56dd-4e4e-8222-ae52cf7d9c48, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:89e841f0-56dd-4e4e-8222-ae52cf7d9c48, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:05:19.573+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '89e841f0-56dd-4e4e-8222-ae52cf7d9c48' of type subvolume
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '89e841f0-56dd-4e4e-8222-ae52cf7d9c48' of type subvolume
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "89e841f0-56dd-4e4e-8222-ae52cf7d9c48", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:89e841f0-56dd-4e4e-8222-ae52cf7d9c48, vol_name:cephfs) < ""
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/89e841f0-56dd-4e4e-8222-ae52cf7d9c48'' moved to trashcan
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:89e841f0-56dd-4e4e-8222-ae52cf7d9c48, vol_name:cephfs) < ""
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c88504fb-ae34-4350-82e6-9d10001a8eb2", "format": "json"}]: dispatch
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c88504fb-ae34-4350-82e6-9d10001a8eb2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c88504fb-ae34-4350-82e6-9d10001a8eb2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:05:19.606+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c88504fb-ae34-4350-82e6-9d10001a8eb2' of type subvolume
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c88504fb-ae34-4350-82e6-9d10001a8eb2' of type subvolume
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c88504fb-ae34-4350-82e6-9d10001a8eb2", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c88504fb-ae34-4350-82e6-9d10001a8eb2, vol_name:cephfs) < ""
Jan 23 19:05:19 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:05:19 compute-0 ceph-mon[75097]: pgmap v1057: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 44 KiB/s wr, 7 op/s
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c88504fb-ae34-4350-82e6-9d10001a8eb2'' moved to trashcan
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:05:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c88504fb-ae34-4350-82e6-9d10001a8eb2, vol_name:cephfs) < ""
Jan 23 19:05:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 91 KiB/s wr, 12 op/s
Jan 23 19:05:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "89e841f0-56dd-4e4e-8222-ae52cf7d9c48", "format": "json"}]: dispatch
Jan 23 19:05:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "89e841f0-56dd-4e4e-8222-ae52cf7d9c48", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c88504fb-ae34-4350-82e6-9d10001a8eb2", "format": "json"}]: dispatch
Jan 23 19:05:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c88504fb-ae34-4350-82e6-9d10001a8eb2", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:05:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7099 writes, 27K keys, 7099 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7099 writes, 1492 syncs, 4.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1308 writes, 3048 keys, 1308 commit groups, 1.0 writes per commit group, ingest: 1.71 MB, 0.00 MB/s
                                           Interval WAL: 1308 writes, 521 syncs, 2.51 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:05:21 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:05:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:05:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:05:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 23 19:05:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:05:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:05:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:21 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:05:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:05:21 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:05:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:05:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:21 compute-0 ceph-mon[75097]: pgmap v1058: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 91 KiB/s wr, 12 op/s
Jan 23 19:05:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:05:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:05:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:05:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:05:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:05:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 70 KiB/s wr, 9 op/s
Jan 23 19:05:22 compute-0 ceph-mon[75097]: pgmap v1059: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 70 KiB/s wr, 9 op/s
Jan 23 19:05:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 99 KiB/s wr, 12 op/s
Jan 23 19:05:24 compute-0 ceph-mon[75097]: pgmap v1060: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 99 KiB/s wr, 12 op/s
Jan 23 19:05:25 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:05:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:05:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:05:25 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:05:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:05:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:25 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:05:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:05:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:05:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3238 syncs, 3.51 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4350 writes, 15K keys, 4350 commit groups, 1.0 writes per commit group, ingest: 20.29 MB, 0.03 MB/s
                                           Interval WAL: 4350 writes, 1874 syncs, 2.32 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:05:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 76 KiB/s wr, 9 op/s
Jan 23 19:05:26 compute-0 ceph-mon[75097]: pgmap v1061: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 76 KiB/s wr, 9 op/s
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 76 KiB/s wr, 9 op/s
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:28 compute-0 ceph-mon[75097]: pgmap v1062: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 76 KiB/s wr, 9 op/s
Jan 23 19:05:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:05:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:05:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 23 19:05:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:05:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:05:28 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:05:28
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms']
Jan 23 19:05:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:05:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:05:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:05:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:05:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:05:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:05:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:05:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:05:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:05:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:05:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:05:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:05:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 105 KiB/s wr, 12 op/s
Jan 23 19:05:30 compute-0 ceph-mon[75097]: pgmap v1063: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 105 KiB/s wr, 12 op/s
Jan 23 19:05:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:05:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:05:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:05:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:05:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:05:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:05:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:05:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:05:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:05:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:05:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:05:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 7449 writes, 29K keys, 7449 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7448 writes, 1611 syncs, 4.62 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1785 writes, 5185 keys, 1785 commit groups, 1.0 writes per commit group, ingest: 6.20 MB, 0.01 MB/s
                                           Interval WAL: 1784 writes, 696 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:05:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:05:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:05:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:05:32 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:05:32 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:05:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 58 KiB/s wr, 7 op/s
Jan 23 19:05:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:05:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:33 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e/28d335de-0c89-48c0-958c-a1737bc8ff9f'.
Jan 23 19:05:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e/.meta.tmp'
Jan 23 19:05:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e/.meta.tmp' to config b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e/.meta'
Jan 23 19:05:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "format": "json"}]: dispatch
Jan 23 19:05:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:05:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:33 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:05:33 compute-0 ceph-mon[75097]: pgmap v1064: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 58 KiB/s wr, 7 op/s
Jan 23 19:05:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:33 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:33 compute-0 sshd-session[248651]: Connection closed by authenticating user root 221.126.252.82 port 35953 [preauth]
Jan 23 19:05:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 10 op/s
Jan 23 19:05:34 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:34 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "format": "json"}]: dispatch
Jan 23 19:05:34 compute-0 ceph-mgr[75390]: [devicehealth INFO root] Check health
Jan 23 19:05:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:05:35 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4'.
Jan 23 19:05:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/.meta.tmp'
Jan 23 19:05:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/.meta.tmp' to config b'/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/.meta'
Jan 23 19:05:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:05:35 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "format": "json"}]: dispatch
Jan 23 19:05:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:05:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:05:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:05:35 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:35 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:05:35.725 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 19:05:35 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:05:35.726 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 19:05:36 compute-0 ceph-mon[75097]: pgmap v1065: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 10 op/s
Jan 23 19:05:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "format": "json"}]: dispatch
Jan 23 19:05:36 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:36 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:05:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 7 op/s
Jan 23 19:05:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:05:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:05:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 23 19:05:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:05:37 compute-0 podman[248653]: 2026-01-23 19:05:37.055870273 +0000 UTC m=+0.132644910 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 19:05:37 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:05:37.729 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 19:05:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:05:38 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:05:38 compute-0 ceph-mon[75097]: pgmap v1066: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 7 op/s
Jan 23 19:05:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:05:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:05:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 7 op/s
Jan 23 19:05:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:38 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:05:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:05:38 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:05:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:05:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:38 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "snap_name": "41ef5335-9df1-41d2-a109-22ba85d3567b", "format": "json"}]: dispatch
Jan 23 19:05:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:41ef5335-9df1-41d2-a109-22ba85d3567b, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:41ef5335-9df1-41d2-a109-22ba85d3567b, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, vol_name:cephfs) < ""
Jan 23 19:05:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:05:40 compute-0 ceph-mon[75097]: pgmap v1067: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 7 op/s
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658955772942907 of space, bias 1.0, pg target 0.19976867318828723 quantized to 32 (current 32)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0002609791215093153 of space, bias 4.0, pg target 0.3131749458111784 quantized to 16 (current 16)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/18be0f70-6e8c-433a-aaa5-ff75690f3db2'.
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/.meta.tmp'
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/.meta.tmp' to config b'/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/.meta'
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, vol_name:cephfs) < ""
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "format": "json"}]: dispatch
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, vol_name:cephfs) < ""
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 9 op/s
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, vol_name:cephfs) < ""
Jan 23 19:05:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:05:40 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "snap_name": "41ef5335-9df1-41d2-a109-22ba85d3567b_5a92e829-b779-4d6a-86d3-e30739ce4640", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41ef5335-9df1-41d2-a109-22ba85d3567b_5a92e829-b779-4d6a-86d3-e30739ce4640, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e/.meta.tmp'
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e/.meta.tmp' to config b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e/.meta'
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41ef5335-9df1-41d2-a109-22ba85d3567b_5a92e829-b779-4d6a-86d3-e30739ce4640, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "snap_name": "41ef5335-9df1-41d2-a109-22ba85d3567b", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41ef5335-9df1-41d2-a109-22ba85d3567b, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e/.meta.tmp'
Jan 23 19:05:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e/.meta.tmp' to config b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e/.meta'
Jan 23 19:05:41 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41ef5335-9df1-41d2-a109-22ba85d3567b, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:41 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:05:41 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:05:41 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:41 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:05:41 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 19:05:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:42 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:05:42 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "snap_name": "41ef5335-9df1-41d2-a109-22ba85d3567b", "format": "json"}]: dispatch
Jan 23 19:05:42 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:42 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "format": "json"}]: dispatch
Jan 23 19:05:42 compute-0 ceph-mon[75097]: pgmap v1068: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 9 op/s
Jan 23 19:05:42 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:05:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 5 op/s
Jan 23 19:05:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "snap_name": "41ef5335-9df1-41d2-a109-22ba85d3567b_5a92e829-b779-4d6a-86d3-e30739ce4640", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "snap_name": "41ef5335-9df1-41d2-a109-22ba85d3567b", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:05:43 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:43 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:43 compute-0 ceph-mon[75097]: pgmap v1069: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 5 op/s
Jan 23 19:05:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:05:43 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:05:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:05:44 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:05:44 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID tempest-cephx-id-1979362240 with tenant ef50bedea4bc4cfd892c55fa7305015f
Jan 23 19:05:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 84 KiB/s wr, 10 op/s
Jan 23 19:05:45 compute-0 podman[248680]: 2026-01-23 19:05:45.026530243 +0000 UTC m=+0.095376001 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 19:05:45 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:05:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:05:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/18be0f70-6e8c-433a-aaa5-ff75690f3db2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_34962dc4-5e08-41ca-b8ab-f76521ae76f2", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:05:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/18be0f70-6e8c-433a-aaa5-ff75690f3db2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_34962dc4-5e08-41ca-b8ab-f76521ae76f2", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 62 KiB/s wr, 7 op/s
Jan 23 19:05:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/18be0f70-6e8c-433a-aaa5-ff75690f3db2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_34962dc4-5e08-41ca-b8ab-f76521ae76f2", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 23 19:05:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 23 19:05:46 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 23 19:05:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 74 KiB/s wr, 8 op/s
Jan 23 19:05:48 compute-0 ceph-mon[75097]: pgmap v1070: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 84 KiB/s wr, 10 op/s
Jan 23 19:05:48 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/18be0f70-6e8c-433a-aaa5-ff75690f3db2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_34962dc4-5e08-41ca-b8ab-f76521ae76f2", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:48 compute-0 ceph-mon[75097]: pgmap v1071: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 62 KiB/s wr, 7 op/s
Jan 23 19:05:48 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/18be0f70-6e8c-433a-aaa5-ff75690f3db2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_34962dc4-5e08-41ca-b8ab-f76521ae76f2", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:48 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:05:49 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "format": "json"}]: dispatch
Jan 23 19:05:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2649806c-36b6-496b-8324-f7d5569ca22e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2649806c-36b6-496b-8324-f7d5569ca22e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:49 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:05:49.053+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2649806c-36b6-496b-8324-f7d5569ca22e' of type subvolume
Jan 23 19:05:49 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2649806c-36b6-496b-8324-f7d5569ca22e' of type subvolume
Jan 23 19:05:49 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2649806c-36b6-496b-8324-f7d5569ca22e'' moved to trashcan
Jan 23 19:05:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:05:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2649806c-36b6-496b-8324-f7d5569ca22e, vol_name:cephfs) < ""
Jan 23 19:05:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 76 KiB/s wr, 8 op/s
Jan 23 19:05:50 compute-0 ceph-mon[75097]: osdmap e143: 3 total, 3 up, 3 in
Jan 23 19:05:50 compute-0 ceph-mon[75097]: pgmap v1073: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 74 KiB/s wr, 8 op/s
Jan 23 19:05:50 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "format": "json"}]: dispatch
Jan 23 19:05:50 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2649806c-36b6-496b-8324-f7d5569ca22e", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:52 compute-0 ceph-mon[75097]: pgmap v1074: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 76 KiB/s wr, 8 op/s
Jan 23 19:05:52 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:52 compute-0 sudo[248700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:05:52 compute-0 sudo[248700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:52 compute-0 sudo[248700]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:52 compute-0 sudo[248725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 23 19:05:52 compute-0 sudo[248725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:05:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 23 19:05:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:05:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:05:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:05:52 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:05:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:05:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 76 KiB/s wr, 8 op/s
Jan 23 19:05:52 compute-0 sudo[248725]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:05:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:05:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:05:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:05:52 compute-0 sudo[248770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:05:52 compute-0 sudo[248770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:52 compute-0 sudo[248770]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:52 compute-0 sudo[248795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:05:52 compute-0 sudo[248795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:05:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:05:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:53 compute-0 ceph-mon[75097]: pgmap v1075: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 76 KiB/s wr, 8 op/s
Jan 23 19:05:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:05:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:05:53 compute-0 sudo[248795]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:53 compute-0 sudo[248850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:05:53 compute-0 sudo[248850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:53 compute-0 sudo[248850]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:53 compute-0 sudo[248875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- inventory --format=json-pretty --filter-for-batch
Jan 23 19:05:53 compute-0 sudo[248875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:53 compute-0 podman[248911]: 2026-01-23 19:05:53.812523919 +0000 UTC m=+0.069103348 container create 2ca2a807a42d92ddd22950ba8fcbdd515f314134de57a164ccbbe0f6a21524d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 19:05:53 compute-0 systemd[1]: Started libpod-conmon-2ca2a807a42d92ddd22950ba8fcbdd515f314134de57a164ccbbe0f6a21524d7.scope.
Jan 23 19:05:53 compute-0 podman[248911]: 2026-01-23 19:05:53.76916986 +0000 UTC m=+0.025749309 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:05:53 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:05:53 compute-0 podman[248911]: 2026-01-23 19:05:53.957373847 +0000 UTC m=+0.213953296 container init 2ca2a807a42d92ddd22950ba8fcbdd515f314134de57a164ccbbe0f6a21524d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_heisenberg, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 19:05:53 compute-0 podman[248911]: 2026-01-23 19:05:53.966625603 +0000 UTC m=+0.223205032 container start 2ca2a807a42d92ddd22950ba8fcbdd515f314134de57a164ccbbe0f6a21524d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_heisenberg, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:05:53 compute-0 podman[248911]: 2026-01-23 19:05:53.971137984 +0000 UTC m=+0.227717433 container attach 2ca2a807a42d92ddd22950ba8fcbdd515f314134de57a164ccbbe0f6a21524d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 19:05:53 compute-0 relaxed_heisenberg[248927]: 167 167
Jan 23 19:05:53 compute-0 systemd[1]: libpod-2ca2a807a42d92ddd22950ba8fcbdd515f314134de57a164ccbbe0f6a21524d7.scope: Deactivated successfully.
Jan 23 19:05:53 compute-0 podman[248911]: 2026-01-23 19:05:53.973345347 +0000 UTC m=+0.229924766 container died 2ca2a807a42d92ddd22950ba8fcbdd515f314134de57a164ccbbe0f6a21524d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, vol_name:cephfs) < ""
Jan 23 19:05:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f552041cb0324150ccf32d7723d399455fc55499cb5a5cb518b218735f5bbad7-merged.mount: Deactivated successfully.
Jan 23 19:05:54 compute-0 podman[248911]: 2026-01-23 19:05:54.151224381 +0000 UTC m=+0.407803800 container remove 2ca2a807a42d92ddd22950ba8fcbdd515f314134de57a164ccbbe0f6a21524d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_heisenberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:05:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:05:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:05:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} v 0)
Jan 23 19:05:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:05:54 compute-0 systemd[1]: libpod-conmon-2ca2a807a42d92ddd22950ba8fcbdd515f314134de57a164ccbbe0f6a21524d7.scope: Deactivated successfully.
Jan 23 19:05:54 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, vol_name:cephfs) < ""
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, vol_name:cephfs) < ""
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1979362240, client_metadata.root=/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/18be0f70-6e8c-433a-aaa5-ff75690f3db2
Jan 23 19:05:54 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=tempest-cephx-id-1979362240,client_metadata.root=/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2/18be0f70-6e8c-433a-aaa5-ff75690f3db2],prefix=session evict} (starting...)
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, vol_name:cephfs) < ""
Jan 23 19:05:54 compute-0 podman[248955]: 2026-01-23 19:05:54.321152502 +0000 UTC m=+0.043835602 container create aa94263de0c86dcb9efe21d36dd2b9975212bfeeac5e2fc8be3da9e23464d1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 23 19:05:54 compute-0 systemd[1]: Started libpod-conmon-aa94263de0c86dcb9efe21d36dd2b9975212bfeeac5e2fc8be3da9e23464d1f8.scope.
Jan 23 19:05:54 compute-0 podman[248955]: 2026-01-23 19:05:54.301588914 +0000 UTC m=+0.024272034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:05:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:05:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41817ce8bdb601e0dcd2320790b61a2cc6028036f158ef3bbcf2ca8e26f74d90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41817ce8bdb601e0dcd2320790b61a2cc6028036f158ef3bbcf2ca8e26f74d90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41817ce8bdb601e0dcd2320790b61a2cc6028036f158ef3bbcf2ca8e26f74d90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41817ce8bdb601e0dcd2320790b61a2cc6028036f158ef3bbcf2ca8e26f74d90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:54 compute-0 podman[248955]: 2026-01-23 19:05:54.419449123 +0000 UTC m=+0.142132243 container init aa94263de0c86dcb9efe21d36dd2b9975212bfeeac5e2fc8be3da9e23464d1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_williams, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:05:54 compute-0 podman[248955]: 2026-01-23 19:05:54.426531746 +0000 UTC m=+0.149214846 container start aa94263de0c86dcb9efe21d36dd2b9975212bfeeac5e2fc8be3da9e23464d1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 19:05:54 compute-0 podman[248955]: 2026-01-23 19:05:54.440007354 +0000 UTC m=+0.162690464 container attach aa94263de0c86dcb9efe21d36dd2b9975212bfeeac5e2fc8be3da9e23464d1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 19:05:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:05:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:05:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 74 KiB/s wr, 7 op/s
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "format": "json"}]: dispatch
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '34962dc4-5e08-41ca-b8ab-f76521ae76f2' of type subvolume
Jan 23 19:05:54 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:05:54.868+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '34962dc4-5e08-41ca-b8ab-f76521ae76f2' of type subvolume
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, vol_name:cephfs) < ""
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/34962dc4-5e08-41ca-b8ab-f76521ae76f2'' moved to trashcan
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:05:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:34962dc4-5e08-41ca-b8ab-f76521ae76f2, vol_name:cephfs) < ""
Jan 23 19:05:54 compute-0 infallible_williams[248971]: [
Jan 23 19:05:54 compute-0 infallible_williams[248971]:     {
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         "available": false,
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         "being_replaced": false,
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         "ceph_device_lvm": false,
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         "lsm_data": {},
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         "lvs": [],
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         "path": "/dev/sr0",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         "rejected_reasons": [
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "Has a FileSystem",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "Insufficient space (<5GB)"
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         ],
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         "sys_api": {
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "actuators": null,
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "device_nodes": [
Jan 23 19:05:54 compute-0 infallible_williams[248971]:                 "sr0"
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             ],
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "devname": "sr0",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "human_readable_size": "482.00 KB",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "id_bus": "ata",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "model": "QEMU DVD-ROM",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "nr_requests": "2",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "parent": "/dev/sr0",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "partitions": {},
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "path": "/dev/sr0",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "removable": "1",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "rev": "2.5+",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "ro": "0",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "rotational": "1",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "sas_address": "",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "sas_device_handle": "",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "scheduler_mode": "mq-deadline",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "sectors": 0,
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "sectorsize": "2048",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "size": 493568.0,
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "support_discard": "2048",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "type": "disk",
Jan 23 19:05:54 compute-0 infallible_williams[248971]:             "vendor": "QEMU"
Jan 23 19:05:54 compute-0 infallible_williams[248971]:         }
Jan 23 19:05:54 compute-0 infallible_williams[248971]:     }
Jan 23 19:05:54 compute-0 infallible_williams[248971]: ]
Jan 23 19:05:55 compute-0 systemd[1]: libpod-aa94263de0c86dcb9efe21d36dd2b9975212bfeeac5e2fc8be3da9e23464d1f8.scope: Deactivated successfully.
Jan 23 19:05:55 compute-0 podman[248955]: 2026-01-23 19:05:55.024782777 +0000 UTC m=+0.747465897 container died aa94263de0c86dcb9efe21d36dd2b9975212bfeeac5e2fc8be3da9e23464d1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_williams, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:05:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-41817ce8bdb601e0dcd2320790b61a2cc6028036f158ef3bbcf2ca8e26f74d90-merged.mount: Deactivated successfully.
Jan 23 19:05:55 compute-0 podman[248955]: 2026-01-23 19:05:55.134682661 +0000 UTC m=+0.857365781 container remove aa94263de0c86dcb9efe21d36dd2b9975212bfeeac5e2fc8be3da9e23464d1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:05:55 compute-0 systemd[1]: libpod-conmon-aa94263de0c86dcb9efe21d36dd2b9975212bfeeac5e2fc8be3da9e23464d1f8.scope: Deactivated successfully.
Jan 23 19:05:55 compute-0 sudo[248875]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7c487935-93a6-4d7a-a6c9-f402d645b4e2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7c487935-93a6-4d7a-a6c9-f402d645b4e2, vol_name:cephfs) < ""
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7c487935-93a6-4d7a-a6c9-f402d645b4e2/32b41355-85ca-4a8d-bb24-3bc26dbaf676'.
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7c487935-93a6-4d7a-a6c9-f402d645b4e2/.meta.tmp'
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7c487935-93a6-4d7a-a6c9-f402d645b4e2/.meta.tmp' to config b'/volumes/_nogroup/7c487935-93a6-4d7a-a6c9-f402d645b4e2/.meta'
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7c487935-93a6-4d7a-a6c9-f402d645b4e2, vol_name:cephfs) < ""
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7c487935-93a6-4d7a-a6c9-f402d645b4e2", "format": "json"}]: dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7c487935-93a6-4d7a-a6c9-f402d645b4e2, vol_name:cephfs) < ""
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7c487935-93a6-4d7a-a6c9-f402d645b4e2, vol_name:cephfs) < ""
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:55 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:05:55 compute-0 sudo[249838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:05:55 compute-0 sudo[249838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:55 compute-0 sudo[249838]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:55 compute-0 sudo[249863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:05:55 compute-0 sudo[249863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:05:55 compute-0 podman[249898]: 2026-01-23 19:05:55.662962184 +0000 UTC m=+0.059196627 container create 127236cb0a9b646269fb38291db4576565b0da9503989f593b0e01ddca58802d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mirzakhani, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: pgmap v1076: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 74 KiB/s wr, 7 op/s
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:05:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.686153) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195155686203, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2371, "num_deletes": 252, "total_data_size": 3070384, "memory_usage": 3113616, "flush_reason": "Manual Compaction"}
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 23 19:05:55 compute-0 systemd[1]: Started libpod-conmon-127236cb0a9b646269fb38291db4576565b0da9503989f593b0e01ddca58802d.scope.
Jan 23 19:05:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:05:55 compute-0 podman[249898]: 2026-01-23 19:05:55.624813742 +0000 UTC m=+0.021048195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195155809664, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2988331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21359, "largest_seqno": 23729, "table_properties": {"data_size": 2977870, "index_size": 6379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25475, "raw_average_key_size": 21, "raw_value_size": 2955339, "raw_average_value_size": 2473, "num_data_blocks": 284, "num_entries": 1195, "num_filter_entries": 1195, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769195005, "oldest_key_time": 1769195005, "file_creation_time": 1769195155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 123565 microseconds, and 7692 cpu microseconds.
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.809715) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2988331 bytes OK
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.809738) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.812997) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.813061) EVENT_LOG_v1 {"time_micros": 1769195155813047, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.813090) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3059583, prev total WAL file size 3059583, number of live WAL files 2.
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.814193) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2918KB)], [50(7644KB)]
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195155814244, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10815806, "oldest_snapshot_seqno": -1}
Jan 23 19:05:55 compute-0 podman[249898]: 2026-01-23 19:05:55.815370046 +0000 UTC m=+0.211604509 container init 127236cb0a9b646269fb38291db4576565b0da9503989f593b0e01ddca58802d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 19:05:55 compute-0 podman[249898]: 2026-01-23 19:05:55.822270635 +0000 UTC m=+0.218505068 container start 127236cb0a9b646269fb38291db4576565b0da9503989f593b0e01ddca58802d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mirzakhani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 19:05:55 compute-0 podman[249898]: 2026-01-23 19:05:55.826078577 +0000 UTC m=+0.222313010 container attach 127236cb0a9b646269fb38291db4576565b0da9503989f593b0e01ddca58802d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:05:55 compute-0 brave_mirzakhani[249914]: 167 167
Jan 23 19:05:55 compute-0 podman[249898]: 2026-01-23 19:05:55.827855331 +0000 UTC m=+0.224089784 container died 127236cb0a9b646269fb38291db4576565b0da9503989f593b0e01ddca58802d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 23 19:05:55 compute-0 systemd[1]: libpod-127236cb0a9b646269fb38291db4576565b0da9503989f593b0e01ddca58802d.scope: Deactivated successfully.
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5315 keys, 8957085 bytes, temperature: kUnknown
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195155892873, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8957085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8919546, "index_size": 23157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 131144, "raw_average_key_size": 24, "raw_value_size": 8822018, "raw_average_value_size": 1659, "num_data_blocks": 967, "num_entries": 5315, "num_filter_entries": 5315, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769195155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.893135) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8957085 bytes
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.896158) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.4 rd, 113.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.5 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 5842, records dropped: 527 output_compression: NoCompression
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.896191) EVENT_LOG_v1 {"time_micros": 1769195155896178, "job": 26, "event": "compaction_finished", "compaction_time_micros": 78742, "compaction_time_cpu_micros": 20619, "output_level": 6, "num_output_files": 1, "total_output_size": 8957085, "num_input_records": 5842, "num_output_records": 5315, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195155896784, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195155898017, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.814061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.898123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.898132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.898136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.898141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:05:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:05:55.898144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:05:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e295122029727f1beeec17e74c78c7b4c1b16f0627fac87a20809b86b1e59387-merged.mount: Deactivated successfully.
Jan 23 19:05:55 compute-0 podman[249898]: 2026-01-23 19:05:55.933631004 +0000 UTC m=+0.329865477 container remove 127236cb0a9b646269fb38291db4576565b0da9503989f593b0e01ddca58802d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:05:55 compute-0 systemd[1]: libpod-conmon-127236cb0a9b646269fb38291db4576565b0da9503989f593b0e01ddca58802d.scope: Deactivated successfully.
Jan 23 19:05:56 compute-0 podman[249940]: 2026-01-23 19:05:56.138529648 +0000 UTC m=+0.098770413 container create a6bd891ed9ce96a926db720b310f5b5ca6da2dd0794dd8ee12680a1d0e96176f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:05:56 compute-0 podman[249940]: 2026-01-23 19:05:56.059648322 +0000 UTC m=+0.019889107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:05:56 compute-0 systemd[1]: Started libpod-conmon-a6bd891ed9ce96a926db720b310f5b5ca6da2dd0794dd8ee12680a1d0e96176f.scope.
Jan 23 19:05:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:05:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb855bdf94eb0856f4c3376f74b6d2a64a525290cfc4c714d812b4690e38fb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb855bdf94eb0856f4c3376f74b6d2a64a525290cfc4c714d812b4690e38fb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb855bdf94eb0856f4c3376f74b6d2a64a525290cfc4c714d812b4690e38fb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb855bdf94eb0856f4c3376f74b6d2a64a525290cfc4c714d812b4690e38fb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb855bdf94eb0856f4c3376f74b6d2a64a525290cfc4c714d812b4690e38fb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:56 compute-0 podman[249940]: 2026-01-23 19:05:56.21802694 +0000 UTC m=+0.178267735 container init a6bd891ed9ce96a926db720b310f5b5ca6da2dd0794dd8ee12680a1d0e96176f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:05:56 compute-0 podman[249940]: 2026-01-23 19:05:56.225834871 +0000 UTC m=+0.186075636 container start a6bd891ed9ce96a926db720b310f5b5ca6da2dd0794dd8ee12680a1d0e96176f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:05:56 compute-0 podman[249940]: 2026-01-23 19:05:56.229728357 +0000 UTC m=+0.189969182 container attach a6bd891ed9ce96a926db720b310f5b5ca6da2dd0794dd8ee12680a1d0e96176f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:05:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 74 KiB/s wr, 7 op/s
Jan 23 19:05:56 compute-0 recursing_cray[249956]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:05:56 compute-0 recursing_cray[249956]: --> All data devices are unavailable
Jan 23 19:05:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "format": "json"}]: dispatch
Jan 23 19:05:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "34962dc4-5e08-41ca-b8ab-f76521ae76f2", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7c487935-93a6-4d7a-a6c9-f402d645b4e2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7c487935-93a6-4d7a-a6c9-f402d645b4e2", "format": "json"}]: dispatch
Jan 23 19:05:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:05:56 compute-0 ceph-mon[75097]: pgmap v1077: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 74 KiB/s wr, 7 op/s
Jan 23 19:05:56 compute-0 systemd[1]: libpod-a6bd891ed9ce96a926db720b310f5b5ca6da2dd0794dd8ee12680a1d0e96176f.scope: Deactivated successfully.
Jan 23 19:05:56 compute-0 podman[249940]: 2026-01-23 19:05:56.707108286 +0000 UTC m=+0.667349071 container died a6bd891ed9ce96a926db720b310f5b5ca6da2dd0794dd8ee12680a1d0e96176f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:05:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fb855bdf94eb0856f4c3376f74b6d2a64a525290cfc4c714d812b4690e38fb1-merged.mount: Deactivated successfully.
Jan 23 19:05:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:05:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/742157962' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:05:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:05:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/742157962' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:05:56 compute-0 podman[249940]: 2026-01-23 19:05:56.817741378 +0000 UTC m=+0.777982143 container remove a6bd891ed9ce96a926db720b310f5b5ca6da2dd0794dd8ee12680a1d0e96176f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 19:05:56 compute-0 systemd[1]: libpod-conmon-a6bd891ed9ce96a926db720b310f5b5ca6da2dd0794dd8ee12680a1d0e96176f.scope: Deactivated successfully.
Jan 23 19:05:56 compute-0 sudo[249863]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:56 compute-0 sudo[249988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:05:56 compute-0 sudo[249988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:56 compute-0 sudo[249988]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:56 compute-0 sudo[250013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:05:56 compute-0 sudo[250013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:57 compute-0 podman[250052]: 2026-01-23 19:05:57.263549436 +0000 UTC m=+0.044359685 container create 18a4eeb0762e67cc9843b37d0a4d3d83d06e6c43f82c115daa0d1ac47de4f2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 23 19:05:57 compute-0 systemd[1]: Started libpod-conmon-18a4eeb0762e67cc9843b37d0a4d3d83d06e6c43f82c115daa0d1ac47de4f2f4.scope.
Jan 23 19:05:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:05:57 compute-0 podman[250052]: 2026-01-23 19:05:57.327840856 +0000 UTC m=+0.108651125 container init 18a4eeb0762e67cc9843b37d0a4d3d83d06e6c43f82c115daa0d1ac47de4f2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:05:57 compute-0 podman[250052]: 2026-01-23 19:05:57.334192351 +0000 UTC m=+0.115002590 container start 18a4eeb0762e67cc9843b37d0a4d3d83d06e6c43f82c115daa0d1ac47de4f2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 19:05:57 compute-0 podman[250052]: 2026-01-23 19:05:57.242584444 +0000 UTC m=+0.023394713 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:05:57 compute-0 exciting_chaplygin[250068]: 167 167
Jan 23 19:05:57 compute-0 podman[250052]: 2026-01-23 19:05:57.338085946 +0000 UTC m=+0.118896195 container attach 18a4eeb0762e67cc9843b37d0a4d3d83d06e6c43f82c115daa0d1ac47de4f2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 19:05:57 compute-0 systemd[1]: libpod-18a4eeb0762e67cc9843b37d0a4d3d83d06e6c43f82c115daa0d1ac47de4f2f4.scope: Deactivated successfully.
Jan 23 19:05:57 compute-0 podman[250052]: 2026-01-23 19:05:57.339686525 +0000 UTC m=+0.120496764 container died 18a4eeb0762e67cc9843b37d0a4d3d83d06e6c43f82c115daa0d1ac47de4f2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 19:05:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad4e60ad69d0f65c089e2c24635e602cf308b22d028fb3f2b0ce7f133f93a9a-merged.mount: Deactivated successfully.
Jan 23 19:05:57 compute-0 podman[250052]: 2026-01-23 19:05:57.402835007 +0000 UTC m=+0.183645256 container remove 18a4eeb0762e67cc9843b37d0a4d3d83d06e6c43f82c115daa0d1ac47de4f2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 19:05:57 compute-0 systemd[1]: libpod-conmon-18a4eeb0762e67cc9843b37d0a4d3d83d06e6c43f82c115daa0d1ac47de4f2f4.scope: Deactivated successfully.
Jan 23 19:05:57 compute-0 podman[250091]: 2026-01-23 19:05:57.576816957 +0000 UTC m=+0.043641486 container create c874aeaaecdacb14532e1c94cb8527c3bc71cc20281c319ada044a569e458487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 19:05:57 compute-0 systemd[1]: Started libpod-conmon-c874aeaaecdacb14532e1c94cb8527c3bc71cc20281c319ada044a569e458487.scope.
Jan 23 19:05:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1999d84a1480f252d2deaa0c82b9e355da65b77093838448009d075a8f9859d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1999d84a1480f252d2deaa0c82b9e355da65b77093838448009d075a8f9859d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1999d84a1480f252d2deaa0c82b9e355da65b77093838448009d075a8f9859d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1999d84a1480f252d2deaa0c82b9e355da65b77093838448009d075a8f9859d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:57 compute-0 podman[250091]: 2026-01-23 19:05:57.554063741 +0000 UTC m=+0.020888290 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:05:57 compute-0 podman[250091]: 2026-01-23 19:05:57.660355267 +0000 UTC m=+0.127179816 container init c874aeaaecdacb14532e1c94cb8527c3bc71cc20281c319ada044a569e458487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 19:05:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:05:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 23 19:05:57 compute-0 podman[250091]: 2026-01-23 19:05:57.66823296 +0000 UTC m=+0.135057489 container start c874aeaaecdacb14532e1c94cb8527c3bc71cc20281c319ada044a569e458487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 19:05:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 23 19:05:57 compute-0 podman[250091]: 2026-01-23 19:05:57.674426611 +0000 UTC m=+0.141251140 container attach c874aeaaecdacb14532e1c94cb8527c3bc71cc20281c319ada044a569e458487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:05:57 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 23 19:05:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/742157962' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:05:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/742157962' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:05:57 compute-0 ceph-mon[75097]: osdmap e144: 3 total, 3 up, 3 in
Jan 23 19:05:57 compute-0 blissful_kilby[250107]: {
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:     "0": [
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:         {
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "devices": [
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "/dev/loop3"
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             ],
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_name": "ceph_lv0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_size": "21470642176",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "name": "ceph_lv0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "tags": {
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.cluster_name": "ceph",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.crush_device_class": "",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.encrypted": "0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.objectstore": "bluestore",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.osd_id": "0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.type": "block",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.vdo": "0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.with_tpm": "0"
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             },
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "type": "block",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "vg_name": "ceph_vg0"
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:         }
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:     ],
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:     "1": [
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:         {
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "devices": [
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "/dev/loop4"
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             ],
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_name": "ceph_lv1",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_size": "21470642176",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "name": "ceph_lv1",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "tags": {
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.cluster_name": "ceph",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.crush_device_class": "",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.encrypted": "0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.objectstore": "bluestore",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.osd_id": "1",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.type": "block",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.vdo": "0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.with_tpm": "0"
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             },
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "type": "block",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "vg_name": "ceph_vg1"
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:         }
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:     ],
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:     "2": [
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:         {
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "devices": [
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "/dev/loop5"
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             ],
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_name": "ceph_lv2",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_size": "21470642176",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "name": "ceph_lv2",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "tags": {
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.cluster_name": "ceph",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.crush_device_class": "",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.encrypted": "0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.objectstore": "bluestore",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.osd_id": "2",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.type": "block",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.vdo": "0",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:                 "ceph.with_tpm": "0"
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             },
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "type": "block",
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:             "vg_name": "ceph_vg2"
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:         }
Jan 23 19:05:57 compute-0 blissful_kilby[250107]:     ]
Jan 23 19:05:57 compute-0 blissful_kilby[250107]: }
Jan 23 19:05:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 systemd[1]: libpod-c874aeaaecdacb14532e1c94cb8527c3bc71cc20281c319ada044a569e458487.scope: Deactivated successfully.
Jan 23 19:05:58 compute-0 podman[250091]: 2026-01-23 19:05:58.006510452 +0000 UTC m=+0.473334981 container died c874aeaaecdacb14532e1c94cb8527c3bc71cc20281c319ada044a569e458487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/0fc65e1a-93c9-48b8-acca-6485b788fa73'.
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/.meta.tmp'
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/.meta.tmp' to config b'/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/.meta'
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "format": "json"}]: dispatch
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1999d84a1480f252d2deaa0c82b9e355da65b77093838448009d075a8f9859d5-merged.mount: Deactivated successfully.
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:05:58 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:58 compute-0 podman[250091]: 2026-01-23 19:05:58.064055818 +0000 UTC m=+0.530880347 container remove c874aeaaecdacb14532e1c94cb8527c3bc71cc20281c319ada044a569e458487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 19:05:58 compute-0 systemd[1]: libpod-conmon-c874aeaaecdacb14532e1c94cb8527c3bc71cc20281c319ada044a569e458487.scope: Deactivated successfully.
Jan 23 19:05:58 compute-0 sudo[250013]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:58 compute-0 sudo[250127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:05:58 compute-0 sudo[250127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:58 compute-0 sudo[250127]: pam_unix(sudo:session): session closed for user root
Jan 23 19:05:58 compute-0 sudo[250152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:05:58 compute-0 sudo[250152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:05:58 compute-0 nova_compute[240119]: 2026-01-23 19:05:58.344 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:05:58 compute-0 podman[250189]: 2026-01-23 19:05:58.555700475 +0000 UTC m=+0.052610616 container create 720f768c84fd6cd987b19a8e4ff2f5166d0e2fdd813c609f5245e6b3d63edbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:05:58 compute-0 systemd[1]: Started libpod-conmon-720f768c84fd6cd987b19a8e4ff2f5166d0e2fdd813c609f5245e6b3d63edbce.scope.
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 74 KiB/s wr, 7 op/s
Jan 23 19:05:58 compute-0 podman[250189]: 2026-01-23 19:05:58.5268514 +0000 UTC m=+0.023761561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:05:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 podman[250189]: 2026-01-23 19:05:58.643847088 +0000 UTC m=+0.140757249 container init 720f768c84fd6cd987b19a8e4ff2f5166d0e2fdd813c609f5245e6b3d63edbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:05:58 compute-0 podman[250189]: 2026-01-23 19:05:58.650911731 +0000 UTC m=+0.147821872 container start 720f768c84fd6cd987b19a8e4ff2f5166d0e2fdd813c609f5245e6b3d63edbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 19:05:58 compute-0 happy_robinson[250205]: 167 167
Jan 23 19:05:58 compute-0 podman[250189]: 2026-01-23 19:05:58.656465916 +0000 UTC m=+0.153376057 container attach 720f768c84fd6cd987b19a8e4ff2f5166d0e2fdd813c609f5245e6b3d63edbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:05:58 compute-0 systemd[1]: libpod-720f768c84fd6cd987b19a8e4ff2f5166d0e2fdd813c609f5245e6b3d63edbce.scope: Deactivated successfully.
Jan 23 19:05:58 compute-0 conmon[250205]: conmon 720f768c84fd6cd987b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-720f768c84fd6cd987b19a8e4ff2f5166d0e2fdd813c609f5245e6b3d63edbce.scope/container/memory.events
Jan 23 19:05:58 compute-0 podman[250189]: 2026-01-23 19:05:58.65785179 +0000 UTC m=+0.154761931 container died 720f768c84fd6cd987b19a8e4ff2f5166d0e2fdd813c609f5245e6b3d63edbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 19:05:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 23 19:05:58 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 23 19:05:58 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:05:58 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-991aa039e3924a71fa091143085c6ffd1bfcf254cbadf4f9bf1eb74e4935135e-merged.mount: Deactivated successfully.
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:05:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "format": "json"}]: dispatch
Jan 23 19:05:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:05:58 compute-0 ceph-mon[75097]: pgmap v1079: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 74 KiB/s wr, 7 op/s
Jan 23 19:05:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 23 19:05:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 23 19:05:58 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:05:58 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 podman[250189]: 2026-01-23 19:05:58.740113179 +0000 UTC m=+0.237023320 container remove 720f768c84fd6cd987b19a8e4ff2f5166d0e2fdd813c609f5245e6b3d63edbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:05:58 compute-0 systemd[1]: libpod-conmon-720f768c84fd6cd987b19a8e4ff2f5166d0e2fdd813c609f5245e6b3d63edbce.scope: Deactivated successfully.
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7c487935-93a6-4d7a-a6c9-f402d645b4e2", "format": "json"}]: dispatch
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7c487935-93a6-4d7a-a6c9-f402d645b4e2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7c487935-93a6-4d7a-a6c9-f402d645b4e2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:05:58.877+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7c487935-93a6-4d7a-a6c9-f402d645b4e2' of type subvolume
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7c487935-93a6-4d7a-a6c9-f402d645b4e2' of type subvolume
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7c487935-93a6-4d7a-a6c9-f402d645b4e2", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7c487935-93a6-4d7a-a6c9-f402d645b4e2, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7c487935-93a6-4d7a-a6c9-f402d645b4e2'' moved to trashcan
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:05:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7c487935-93a6-4d7a-a6c9-f402d645b4e2, vol_name:cephfs) < ""
Jan 23 19:05:58 compute-0 podman[250230]: 2026-01-23 19:05:58.973258493 +0000 UTC m=+0.108818849 container create 4486b29e6e394d80d06f476ae05c8632e444531ad7d6ddc45df5cbf1a4de0646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 19:05:58 compute-0 podman[250230]: 2026-01-23 19:05:58.890898222 +0000 UTC m=+0.026458578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:05:59 compute-0 systemd[1]: Started libpod-conmon-4486b29e6e394d80d06f476ae05c8632e444531ad7d6ddc45df5cbf1a4de0646.scope.
Jan 23 19:05:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7292b23fc3c5c10d28f0f86f4b5f800290a64a7a466735aaa7a4cdf27687374/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7292b23fc3c5c10d28f0f86f4b5f800290a64a7a466735aaa7a4cdf27687374/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7292b23fc3c5c10d28f0f86f4b5f800290a64a7a466735aaa7a4cdf27687374/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7292b23fc3c5c10d28f0f86f4b5f800290a64a7a466735aaa7a4cdf27687374/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:05:59 compute-0 podman[250230]: 2026-01-23 19:05:59.079308974 +0000 UTC m=+0.214869330 container init 4486b29e6e394d80d06f476ae05c8632e444531ad7d6ddc45df5cbf1a4de0646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 19:05:59 compute-0 podman[250230]: 2026-01-23 19:05:59.086796846 +0000 UTC m=+0.222357172 container start 4486b29e6e394d80d06f476ae05c8632e444531ad7d6ddc45df5cbf1a4de0646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:05:59 compute-0 podman[250230]: 2026-01-23 19:05:59.110639399 +0000 UTC m=+0.246199755 container attach 4486b29e6e394d80d06f476ae05c8632e444531ad7d6ddc45df5cbf1a4de0646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:05:59 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice", "format": "json"}]: dispatch
Jan 23 19:05:59 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7c487935-93a6-4d7a-a6c9-f402d645b4e2", "format": "json"}]: dispatch
Jan 23 19:05:59 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7c487935-93a6-4d7a-a6c9-f402d645b4e2", "force": true, "format": "json"}]: dispatch
Jan 23 19:05:59 compute-0 lvm[250325]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:05:59 compute-0 lvm[250326]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:05:59 compute-0 lvm[250326]: VG ceph_vg1 finished
Jan 23 19:05:59 compute-0 lvm[250325]: VG ceph_vg0 finished
Jan 23 19:05:59 compute-0 lvm[250328]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:05:59 compute-0 lvm[250328]: VG ceph_vg2 finished
Jan 23 19:05:59 compute-0 jovial_elgamal[250246]: {}
Jan 23 19:05:59 compute-0 systemd[1]: libpod-4486b29e6e394d80d06f476ae05c8632e444531ad7d6ddc45df5cbf1a4de0646.scope: Deactivated successfully.
Jan 23 19:05:59 compute-0 systemd[1]: libpod-4486b29e6e394d80d06f476ae05c8632e444531ad7d6ddc45df5cbf1a4de0646.scope: Consumed 1.333s CPU time.
Jan 23 19:05:59 compute-0 podman[250230]: 2026-01-23 19:05:59.916425269 +0000 UTC m=+1.051985615 container died 4486b29e6e394d80d06f476ae05c8632e444531ad7d6ddc45df5cbf1a4de0646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 19:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7292b23fc3c5c10d28f0f86f4b5f800290a64a7a466735aaa7a4cdf27687374-merged.mount: Deactivated successfully.
Jan 23 19:06:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:06:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:06:00 compute-0 podman[250230]: 2026-01-23 19:06:00.102370231 +0000 UTC m=+1.237930547 container remove 4486b29e6e394d80d06f476ae05c8632e444531ad7d6ddc45df5cbf1a4de0646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 19:06:00 compute-0 systemd[1]: libpod-conmon-4486b29e6e394d80d06f476ae05c8632e444531ad7d6ddc45df5cbf1a4de0646.scope: Deactivated successfully.
Jan 23 19:06:00 compute-0 sudo[250152]: pam_unix(sudo:session): session closed for user root
Jan 23 19:06:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:06:00 compute-0 nova_compute[240119]: 2026-01-23 19:06:00.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:06:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:06:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:06:00 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:06:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:06:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:06:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:06:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 118 KiB/s wr, 13 op/s
Jan 23 19:06:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:06:01 compute-0 sudo[250344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:06:01 compute-0 sudo[250344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:06:01 compute-0 sudo[250344]: pam_unix(sudo:session): session closed for user root
Jan 23 19:06:01 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:01 compute-0 nova_compute[240119]: 2026-01-23 19:06:01.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:06:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:01 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID tempest-cephx-id-1979362240 with tenant ef50bedea4bc4cfd892c55fa7305015f
Jan 23 19:06:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/0fc65e1a-93c9-48b8-acca-6485b788fa73", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ccdce1-cbc8-4922-8f55-95da428422fa", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/0fc65e1a-93c9-48b8-acca-6485b788fa73", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ccdce1-cbc8-4922-8f55-95da428422fa", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:01 compute-0 nova_compute[240119]: 2026-01-23 19:06:01.373 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:06:01 compute-0 nova_compute[240119]: 2026-01-23 19:06:01.373 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:06:01 compute-0 nova_compute[240119]: 2026-01-23 19:06:01.373 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:06:01 compute-0 nova_compute[240119]: 2026-01-23 19:06:01.373 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:06:01 compute-0 nova_compute[240119]: 2026-01-23 19:06:01.374 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:06:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/0fc65e1a-93c9-48b8-acca-6485b788fa73", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ccdce1-cbc8-4922-8f55-95da428422fa", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:06:01 compute-0 ceph-mon[75097]: pgmap v1080: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 118 KiB/s wr, 13 op/s
Jan 23 19:06:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:06:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/0fc65e1a-93c9-48b8-acca-6485b788fa73", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ccdce1-cbc8-4922-8f55-95da428422fa", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:01 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/0fc65e1a-93c9-48b8-acca-6485b788fa73", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ccdce1-cbc8-4922-8f55-95da428422fa", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:06:01 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030329388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:06:01 compute-0 nova_compute[240119]: 2026-01-23 19:06:01.947 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:06:02 compute-0 nova_compute[240119]: 2026-01-23 19:06:02.122 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:06:02 compute-0 nova_compute[240119]: 2026-01-23 19:06:02.123 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5029MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:06:02 compute-0 nova_compute[240119]: 2026-01-23 19:06:02.124 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:06:02 compute-0 nova_compute[240119]: 2026-01-23 19:06:02.124 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:06:02 compute-0 nova_compute[240119]: 2026-01-23 19:06:02.191 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:06:02 compute-0 nova_compute[240119]: 2026-01-23 19:06:02.192 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:06:02 compute-0 nova_compute[240119]: 2026-01-23 19:06:02.210 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:06:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:06:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:06:02 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice_bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:06:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 118 KiB/s wr, 13 op/s
Jan 23 19:06:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:02 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:02 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3030329388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:06:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:06:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:06:02 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209296174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:06:02 compute-0 nova_compute[240119]: 2026-01-23 19:06:02.774 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:06:02 compute-0 nova_compute[240119]: 2026-01-23 19:06:02.781 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:06:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:03 compute-0 ceph-mon[75097]: pgmap v1081: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 118 KiB/s wr, 13 op/s
Jan 23 19:06:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:03 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:03 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1209296174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:06:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 109 KiB/s wr, 13 op/s
Jan 23 19:06:04 compute-0 ceph-mon[75097]: pgmap v1082: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 109 KiB/s wr, 13 op/s
Jan 23 19:06:05 compute-0 nova_compute[240119]: 2026-01-23 19:06:05.889 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:06:05 compute-0 nova_compute[240119]: 2026-01-23 19:06:05.890 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:06:05 compute-0 nova_compute[240119]: 2026-01-23 19:06:05.890 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/6132594c-6945-4493-8278-28b4ac14c072'.
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta.tmp'
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta.tmp' to config b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta'
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "format": "json"}]: dispatch
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:06:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:06 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:06:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:06:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 23 19:06:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:06:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:06:06 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 109 KiB/s wr, 13 op/s
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} v 0)
Jan 23 19:06:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:06 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1979362240, client_metadata.root=/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/0fc65e1a-93c9-48b8-acca-6485b788fa73
Jan 23 19:06:06 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=tempest-cephx-id-1979362240,client_metadata.root=/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa/0fc65e1a-93c9-48b8-acca-6485b788fa73],prefix=session evict} (starting...)
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 nova_compute[240119]: 2026-01-23 19:06:06.891 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:06:06 compute-0 nova_compute[240119]: 2026-01-23 19:06:06.892 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:06:06 compute-0 nova_compute[240119]: 2026-01-23 19:06:06.893 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:06:06 compute-0 nova_compute[240119]: 2026-01-23 19:06:06.912 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:06:06 compute-0 nova_compute[240119]: 2026-01-23 19:06:06.912 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:06:06 compute-0 nova_compute[240119]: 2026-01-23 19:06:06.913 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:06:06 compute-0 nova_compute[240119]: 2026-01-23 19:06:06.913 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "format": "json"}]: dispatch
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:58ccdce1-cbc8-4922-8f55-95da428422fa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:58ccdce1-cbc8-4922-8f55-95da428422fa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:06.925+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58ccdce1-cbc8-4922-8f55-95da428422fa' of type subvolume
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58ccdce1-cbc8-4922-8f55-95da428422fa' of type subvolume
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, vol_name:cephfs) < ""
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/58ccdce1-cbc8-4922-8f55-95da428422fa'' moved to trashcan
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:06:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58ccdce1-cbc8-4922-8f55-95da428422fa, vol_name:cephfs) < ""
Jan 23 19:06:07 compute-0 nova_compute[240119]: 2026-01-23 19:06:07.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:06:07 compute-0 nova_compute[240119]: 2026-01-23 19:06:07.367 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:06:07 compute-0 nova_compute[240119]: 2026-01-23 19:06:07.367 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:06:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:06:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "format": "json"}]: dispatch
Jan 23 19:06:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:06:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:06:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:06:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:06:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:06:07 compute-0 ceph-mon[75097]: pgmap v1083: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 109 KiB/s wr, 13 op/s
Jan 23 19:06:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:08 compute-0 podman[250415]: 2026-01-23 19:06:08.014225466 +0000 UTC m=+0.091867894 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 19:06:08 compute-0 nova_compute[240119]: 2026-01-23 19:06:08.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:06:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 467 B/s rd, 100 KiB/s wr, 12 op/s
Jan 23 19:06:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "format": "json"}]: dispatch
Jan 23 19:06:08 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58ccdce1-cbc8-4922-8f55-95da428422fa", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:09 compute-0 ceph-mon[75097]: pgmap v1084: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 467 B/s rd, 100 KiB/s wr, 12 op/s
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, vol_name:cephfs) < ""
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/8d49cc89-ff1c-401b-8222-1d3cb91f7a28'.
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/.meta.tmp'
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/.meta.tmp' to config b'/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/.meta'
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, vol_name:cephfs) < ""
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "format": "json"}]: dispatch
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, vol_name:cephfs) < ""
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, vol_name:cephfs) < ""
Jan 23 19:06:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:06:09 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "snap_name": "cdd68b8f-be76-4491-bcdc-990526be471f", "format": "json"}]: dispatch
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cdd68b8f-be76-4491-bcdc-990526be471f, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:06:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cdd68b8f-be76-4491-bcdc-990526be471f, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:06:10 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:06:10 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:06:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:06:10 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice_bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:06:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:10 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 138 KiB/s wr, 16 op/s
Jan 23 19:06:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:06:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "format": "json"}]: dispatch
Jan 23 19:06:10 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "snap_name": "cdd68b8f-be76-4491-bcdc-990526be471f", "format": "json"}]: dispatch
Jan 23 19:06:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:06:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:06:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:10 compute-0 ceph-mon[75097]: pgmap v1085: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 138 KiB/s wr, 16 op/s
Jan 23 19:06:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 76 KiB/s wr, 9 op/s
Jan 23 19:06:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:12 compute-0 ceph-mon[75097]: pgmap v1086: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 76 KiB/s wr, 9 op/s
Jan 23 19:06:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:06:13.586 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:06:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:06:13.587 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:06:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:06:13.587 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:06:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:13 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:13 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID tempest-cephx-id-1979362240 with tenant ef50bedea4bc4cfd892c55fa7305015f
Jan 23 19:06:13 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/8d49cc89-ff1c-401b-8222-1d3cb91f7a28", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e1861ac0-d15e-48e7-8018-9e451a8b3dee", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:13 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/8d49cc89-ff1c-401b-8222-1d3cb91f7a28", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e1861ac0-d15e-48e7-8018-9e451a8b3dee", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:13 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/8d49cc89-ff1c-401b-8222-1d3cb91f7a28", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e1861ac0-d15e-48e7-8018-9e451a8b3dee", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "snap_name": "cdd68b8f-be76-4491-bcdc-990526be471f", "target_sub_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "format": "json"}]: dispatch
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:cdd68b8f-be76-4491-bcdc-990526be471f, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, target_sub_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, vol_name:cephfs) < ""
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/f3b88e5e-db71-4c1b-85c2-3d20d6b3d0d4'.
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta.tmp'
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta.tmp' to config b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta'
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 7b010406-42cb-4270-855c-c03b614b4f32 for path b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1'
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta.tmp'
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta.tmp' to config b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta'
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:cdd68b8f-be76-4491-bcdc-990526be471f, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, target_sub_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, vol_name:cephfs) < ""
Jan 23 19:06:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:14.220+0000 7f06ac529640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:14.220+0000 7f06ac529640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:14.220+0000 7f06ac529640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:14.220+0000 7f06ac529640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:14.220+0000 7f06ac529640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "format": "json"}]: dispatch
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 2372c43f-e00a-4e92-b338-82e4882d6cb1)
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:14.242+0000 7f06add2c640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:14.242+0000 7f06add2c640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:14.242+0000 7f06add2c640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:14.242+0000 7f06add2c640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:14.242+0000 7f06add2c640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 2372c43f-e00a-4e92-b338-82e4882d6cb1) -- by 0 seconds
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta.tmp'
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta.tmp' to config b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta'
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 120 KiB/s wr, 14 op/s
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:06:14 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:14 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:14 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/8d49cc89-ff1c-401b-8222-1d3cb91f7a28", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e1861ac0-d15e-48e7-8018-9e451a8b3dee", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:14 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/8d49cc89-ff1c-401b-8222-1d3cb91f7a28", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e1861ac0-d15e-48e7-8018-9e451a8b3dee", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:14 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "snap_name": "cdd68b8f-be76-4491-bcdc-990526be471f", "target_sub_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "format": "json"}]: dispatch
Jan 23 19:06:14 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "format": "json"}]: dispatch
Jan 23 19:06:14 compute-0 ceph-mon[75097]: pgmap v1087: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 120 KiB/s wr, 14 op/s
Jan 23 19:06:15 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:15.223+0000 7f0683a0a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:15 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:15.223+0000 7f0683a0a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:15 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:15.223+0000 7f0683a0a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:15 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:15.223+0000 7f0683a0a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:15 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:15.223+0000 7f0683a0a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.snap/cdd68b8f-be76-4491-bcdc-990526be471f/6132594c-6945-4493-8278-28b4ac14c072' to b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/f3b88e5e-db71-4c1b-85c2-3d20d6b3d0d4'
Jan 23 19:06:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 23 19:06:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:06:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 23 19:06:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:06:15 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta.tmp'
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta.tmp' to config b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta'
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:06:15 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:15 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:15 compute-0 podman[250479]: 2026-01-23 19:06:15.960033274 +0000 UTC m=+0.046778223 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 19:06:16 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:06:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 23 19:06:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 23 19:06:16 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 23 19:06:16 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.clone_index] untracking 7b010406-42cb-4270-855c-c03b614b4f32
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta.tmp'
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta.tmp' to config b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta'
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta.tmp'
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta.tmp' to config b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1/.meta'
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 2372c43f-e00a-4e92-b338-82e4882d6cb1)
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] Exception VolumeException was raised. Apparently an entry from the metadata file of clone source was removed because one of the clone job(s) has completed/cancelled. Therefore ignoring and proceeding Printing the exception: -22 (error fetching subvolume metadata)
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [progress WARNING root] complete: ev mgr-vol-ongoing-clones does not exist
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f06b9d6fb50>
Jan 23 19:06:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 10 op/s
Jan 23 19:06:17 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:17 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:06:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:06:17 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:06:17 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.rszfwe(active, since 31m)
Jan 23 19:06:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:17 compute-0 ceph-mon[75097]: pgmap v1088: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 10 op/s
Jan 23 19:06:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:18 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 10 op/s
Jan 23 19:06:18 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:06:18 compute-0 ceph-mon[75097]: mgrmap e14: compute-0.rszfwe(active, since 31m)
Jan 23 19:06:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:18 compute-0 ceph-mon[75097]: pgmap v1089: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 10 op/s
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, vol_name:cephfs) < ""
Jan 23 19:06:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:19 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} v 0)
Jan 23 19:06:19 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:19 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, vol_name:cephfs) < ""
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, vol_name:cephfs) < ""
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1979362240, client_metadata.root=/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/8d49cc89-ff1c-401b-8222-1d3cb91f7a28
Jan 23 19:06:19 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=tempest-cephx-id-1979362240,client_metadata.root=/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee/8d49cc89-ff1c-401b-8222-1d3cb91f7a28],prefix=session evict} (starting...)
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, vol_name:cephfs) < ""
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "format": "json"}]: dispatch
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:19.343+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e1861ac0-d15e-48e7-8018-9e451a8b3dee' of type subvolume
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e1861ac0-d15e-48e7-8018-9e451a8b3dee' of type subvolume
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, vol_name:cephfs) < ""
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e1861ac0-d15e-48e7-8018-9e451a8b3dee'' moved to trashcan
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:06:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e1861ac0-d15e-48e7-8018-9e451a8b3dee, vol_name:cephfs) < ""
Jan 23 19:06:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "format": "json"}]: dispatch
Jan 23 19:06:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e1861ac0-d15e-48e7-8018-9e451a8b3dee", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 157 KiB/s wr, 19 op/s
Jan 23 19:06:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:20 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:20 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID tempest-cephx-id-1979362240 with tenant ef50bedea4bc4cfd892c55fa7305015f
Jan 23 19:06:21 compute-0 ceph-mon[75097]: pgmap v1090: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 157 KiB/s wr, 19 op/s
Jan 23 19:06:21 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:21 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:21 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:22 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:06:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:06:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:06:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 23 19:06:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:06:22 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:06:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:22 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:06:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:06:22 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:06:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:22 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 110 KiB/s wr, 14 op/s
Jan 23 19:06:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:06:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:06:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:06:22 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:06:22 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:06:22 compute-0 ceph-mon[75097]: pgmap v1091: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 110 KiB/s wr, 14 op/s
Jan 23 19:06:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 62 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 138 KiB/s wr, 19 op/s
Jan 23 19:06:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:06:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:06:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:06:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID alice bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:06:24 compute-0 ceph-mon[75097]: pgmap v1092: 305 pgs: 305 active+clean; 62 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 138 KiB/s wr, 19 op/s
Jan 23 19:06:24 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:06:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:25 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} v 0)
Jan 23 19:06:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:25 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:25 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1979362240, client_metadata.root=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4
Jan 23 19:06:25 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=tempest-cephx-id-1979362240,client_metadata.root=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4],prefix=session evict} (starting...)
Jan 23 19:06:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:25 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:26 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "r", "format": "json"}]: dispatch
Jan 23 19:06:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:26 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:26 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:26 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 62 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 94 KiB/s wr, 13 op/s
Jan 23 19:06:26 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "137cf5f0-949a-449f-82d0-ceae6fdaaa3e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:06:26 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:137cf5f0-949a-449f-82d0-ceae6fdaaa3e, vol_name:cephfs) < ""
Jan 23 19:06:27 compute-0 ceph-mon[75097]: pgmap v1093: 305 pgs: 305 active+clean; 62 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 94 KiB/s wr, 13 op/s
Jan 23 19:06:27 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/137cf5f0-949a-449f-82d0-ceae6fdaaa3e/6215fdcb-384f-465a-8c54-7825a9e0a2fc'.
Jan 23 19:06:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/137cf5f0-949a-449f-82d0-ceae6fdaaa3e/.meta.tmp'
Jan 23 19:06:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/137cf5f0-949a-449f-82d0-ceae6fdaaa3e/.meta.tmp' to config b'/volumes/_nogroup/137cf5f0-949a-449f-82d0-ceae6fdaaa3e/.meta'
Jan 23 19:06:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:137cf5f0-949a-449f-82d0-ceae6fdaaa3e, vol_name:cephfs) < ""
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "137cf5f0-949a-449f-82d0-ceae6fdaaa3e", "format": "json"}]: dispatch
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:137cf5f0-949a-449f-82d0-ceae6fdaaa3e, vol_name:cephfs) < ""
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:137cf5f0-949a-449f-82d0-ceae6fdaaa3e, vol_name:cephfs) < ""
Jan 23 19:06:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:06:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 23 19:06:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:06:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 23 19:06:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 62 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 94 KiB/s wr, 13 op/s
Jan 23 19:06:28 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:06:28 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "137cf5f0-949a-449f-82d0-ceae6fdaaa3e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:06:28 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 23 19:06:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:06:28 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:06:28
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['images', 'volumes', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Jan 23 19:06:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:06:29 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:29 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:29 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID tempest-cephx-id-1979362240 with tenant ef50bedea4bc4cfd892c55fa7305015f
Jan 23 19:06:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:29 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:29 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:29 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "137cf5f0-949a-449f-82d0-ceae6fdaaa3e", "format": "json"}]: dispatch
Jan 23 19:06:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:06:29 compute-0 ceph-mon[75097]: pgmap v1094: 305 pgs: 305 active+clean; 62 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 94 KiB/s wr, 13 op/s
Jan 23 19:06:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 23 19:06:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 23 19:06:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:29 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:06:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:06:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:06:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:06:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:06:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:06:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 155 KiB/s wr, 20 op/s
Jan 23 19:06:30 compute-0 ceph-mon[75097]: pgmap v1095: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 155 KiB/s wr, 20 op/s
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "137cf5f0-949a-449f-82d0-ceae6fdaaa3e", "format": "json"}]: dispatch
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:137cf5f0-949a-449f-82d0-ceae6fdaaa3e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:137cf5f0-949a-449f-82d0-ceae6fdaaa3e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:31 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:31.458+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '137cf5f0-949a-449f-82d0-ceae6fdaaa3e' of type subvolume
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '137cf5f0-949a-449f-82d0-ceae6fdaaa3e' of type subvolume
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "137cf5f0-949a-449f-82d0-ceae6fdaaa3e", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:137cf5f0-949a-449f-82d0-ceae6fdaaa3e, vol_name:cephfs) < ""
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/137cf5f0-949a-449f-82d0-ceae6fdaaa3e'' moved to trashcan
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:137cf5f0-949a-449f-82d0-ceae6fdaaa3e, vol_name:cephfs) < ""
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 23 19:06:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 23 19:06:31 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID bob with tenant 1bedbf7265dc42cf859f02ee7f9194c5
Jan 23 19:06:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:31 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "137cf5f0-949a-449f-82d0-ceae6fdaaa3e", "format": "json"}]: dispatch
Jan 23 19:06:31 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "137cf5f0-949a-449f-82d0-ceae6fdaaa3e", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 23 19:06:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} v 0)
Jan 23 19:06:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 89 KiB/s wr, 11 op/s
Jan 23 19:06:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1979362240, client_metadata.root=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4
Jan 23 19:06:32 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=tempest-cephx-id-1979362240,client_metadata.root=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4],prefix=session evict} (starting...)
Jan 23 19:06:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:33 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:33 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:33 compute-0 ceph-mon[75097]: pgmap v1096: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 89 KiB/s wr, 11 op/s
Jan 23 19:06:34 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 143 KiB/s wr, 17 op/s
Jan 23 19:06:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4ab0802a-2a55-43b2-9e63-a0e480add7c1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:06:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ab0802a-2a55-43b2-9e63-a0e480add7c1, vol_name:cephfs) < ""
Jan 23 19:06:35 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4ab0802a-2a55-43b2-9e63-a0e480add7c1/a8f40bcc-fe1e-443d-bbfb-b54817d514cc'.
Jan 23 19:06:35 compute-0 ceph-mon[75097]: pgmap v1097: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 143 KiB/s wr, 17 op/s
Jan 23 19:06:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4ab0802a-2a55-43b2-9e63-a0e480add7c1/.meta.tmp'
Jan 23 19:06:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4ab0802a-2a55-43b2-9e63-a0e480add7c1/.meta.tmp' to config b'/volumes/_nogroup/4ab0802a-2a55-43b2-9e63-a0e480add7c1/.meta'
Jan 23 19:06:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ab0802a-2a55-43b2-9e63-a0e480add7c1, vol_name:cephfs) < ""
Jan 23 19:06:35 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4ab0802a-2a55-43b2-9e63-a0e480add7c1", "format": "json"}]: dispatch
Jan 23 19:06:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ab0802a-2a55-43b2-9e63-a0e480add7c1, vol_name:cephfs) < ""
Jan 23 19:06:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ab0802a-2a55-43b2-9e63-a0e480add7c1, vol_name:cephfs) < ""
Jan 23 19:06:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:06:35 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:35 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fba5485d-e5a8-4974-84d8-e4e56ba346a4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:06:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fba5485d-e5a8-4974-84d8-e4e56ba346a4, vol_name:cephfs) < ""
Jan 23 19:06:36 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/9aee36c1-5f50-49fa-9e8e-df410133e8e1'.
Jan 23 19:06:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/.meta.tmp'
Jan 23 19:06:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/.meta.tmp' to config b'/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/.meta'
Jan 23 19:06:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fba5485d-e5a8-4974-84d8-e4e56ba346a4, vol_name:cephfs) < ""
Jan 23 19:06:36 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fba5485d-e5a8-4974-84d8-e4e56ba346a4", "format": "json"}]: dispatch
Jan 23 19:06:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fba5485d-e5a8-4974-84d8-e4e56ba346a4, vol_name:cephfs) < ""
Jan 23 19:06:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fba5485d-e5a8-4974-84d8-e4e56ba346a4, vol_name:cephfs) < ""
Jan 23 19:06:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:06:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:36 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:36 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID tempest-cephx-id-1979362240 with tenant ef50bedea4bc4cfd892c55fa7305015f
Jan 23 19:06:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 115 KiB/s wr, 12 op/s
Jan 23 19:06:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4ab0802a-2a55-43b2-9e63-a0e480add7c1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:06:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4ab0802a-2a55-43b2-9e63-a0e480add7c1", "format": "json"}]: dispatch
Jan 23 19:06:36 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fba5485d-e5a8-4974-84d8-e4e56ba346a4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:06:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fba5485d-e5a8-4974-84d8-e4e56ba346a4", "format": "json"}]: dispatch
Jan 23 19:06:36 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:36 compute-0 ceph-mon[75097]: pgmap v1098: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 115 KiB/s wr, 12 op/s
Jan 23 19:06:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 115 KiB/s wr, 12 op/s
Jan 23 19:06:38 compute-0 podman[250502]: 2026-01-23 19:06:38.994535383 +0000 UTC m=+0.077438032 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4ab0802a-2a55-43b2-9e63-a0e480add7c1", "format": "json"}]: dispatch
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4ab0802a-2a55-43b2-9e63-a0e480add7c1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4ab0802a-2a55-43b2-9e63-a0e480add7c1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ab0802a-2a55-43b2-9e63-a0e480add7c1' of type subvolume
Jan 23 19:06:39 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:39.132+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ab0802a-2a55-43b2-9e63-a0e480add7c1' of type subvolume
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4ab0802a-2a55-43b2-9e63-a0e480add7c1", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ab0802a-2a55-43b2-9e63-a0e480add7c1, vol_name:cephfs) < ""
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4ab0802a-2a55-43b2-9e63-a0e480add7c1'' moved to trashcan
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ab0802a-2a55-43b2-9e63-a0e480add7c1, vol_name:cephfs) < ""
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} v 0)
Jan 23 19:06:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:39 compute-0 ceph-mon[75097]: pgmap v1099: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 115 KiB/s wr, 12 op/s
Jan 23 19:06:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:39 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1979362240, client_metadata.root=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4
Jan 23 19:06:39 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=tempest-cephx-id-1979362240,client_metadata.root=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4],prefix=session evict} (starting...)
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fba5485d-e5a8-4974-84d8-e4e56ba346a4", "auth_id": "bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:fba5485d-e5a8-4974-84d8-e4e56ba346a4, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 23 19:06:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 23 19:06:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363,allow rw path=/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/9aee36c1-5f50-49fa-9e8e-df410133e8e1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fba5485d-e5a8-4974-84d8-e4e56ba346a4"]} v 0)
Jan 23 19:06:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363,allow rw path=/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/9aee36c1-5f50-49fa-9e8e-df410133e8e1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fba5485d-e5a8-4974-84d8-e4e56ba346a4"]} : dispatch
Jan 23 19:06:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363,allow rw path=/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/9aee36c1-5f50-49fa-9e8e-df410133e8e1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fba5485d-e5a8-4974-84d8-e4e56ba346a4"]}]': finished
Jan 23 19:06:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 23 19:06:40 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658993654126277 of space, bias 1.0, pg target 0.19976980962378832 quantized to 32 (current 32)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0003535632642520051 of space, bias 4.0, pg target 0.4242759171024061 quantized to 16 (current 16)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00017169491111545225 quantized to 32 (current 32)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:fba5485d-e5a8-4974-84d8-e4e56ba346a4, tenant_id:1bedbf7265dc42cf859f02ee7f9194c5, vol_name:cephfs) < ""
Jan 23 19:06:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 154 KiB/s wr, 60 op/s
Jan 23 19:06:40 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4ab0802a-2a55-43b2-9e63-a0e480add7c1", "format": "json"}]: dispatch
Jan 23 19:06:40 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4ab0802a-2a55-43b2-9e63-a0e480add7c1", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:40 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:40 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:40 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fba5485d-e5a8-4974-84d8-e4e56ba346a4", "auth_id": "bob", "tenant_id": "1bedbf7265dc42cf859f02ee7f9194c5", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 23 19:06:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363,allow rw path=/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/9aee36c1-5f50-49fa-9e8e-df410133e8e1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fba5485d-e5a8-4974-84d8-e4e56ba346a4"]} : dispatch
Jan 23 19:06:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363,allow rw path=/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/9aee36c1-5f50-49fa-9e8e-df410133e8e1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fba5485d-e5a8-4974-84d8-e4e56ba346a4"]}]': finished
Jan 23 19:06:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 23 19:06:40 compute-0 ceph-mon[75097]: pgmap v1100: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 154 KiB/s wr, 60 op/s
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:42 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: Creating meta for ID tempest-cephx-id-1979362240 with tenant ef50bedea4bc4cfd892c55fa7305015f
Jan 23 19:06:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} v 0)
Jan 23 19:06:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"} : dispatch
Jan 23 19:06:42 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1979362240", "caps": ["mds", "allow rw path=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "mon", "allow r"], "format": "json"}]': finished
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 93 KiB/s wr, 53 op/s
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume authorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, tenant_id:ef50bedea4bc4cfd892c55fa7305015f, vol_name:cephfs) < ""
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fba5485d-e5a8-4974-84d8-e4e56ba346a4", "auth_id": "bob", "format": "json"}]: dispatch
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:fba5485d-e5a8-4974-84d8-e4e56ba346a4, vol_name:cephfs) < ""
Jan 23 19:06:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 23 19:06:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 23 19:06:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6"]} v 0)
Jan 23 19:06:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6"]} : dispatch
Jan 23 19:06:42 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6"]}]': finished
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:fba5485d-e5a8-4974-84d8-e4e56ba346a4, vol_name:cephfs) < ""
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fba5485d-e5a8-4974-84d8-e4e56ba346a4", "auth_id": "bob", "format": "json"}]: dispatch
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:fba5485d-e5a8-4974-84d8-e4e56ba346a4, vol_name:cephfs) < ""
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/9aee36c1-5f50-49fa-9e8e-df410133e8e1
Jan 23 19:06:42 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/fba5485d-e5a8-4974-84d8-e4e56ba346a4/9aee36c1-5f50-49fa-9e8e-df410133e8e1],prefix=session evict} (starting...)
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:fba5485d-e5a8-4974-84d8-e4e56ba346a4, vol_name:cephfs) < ""
Jan 23 19:06:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "tenant_id": "ef50bedea4bc4cfd892c55fa7305015f", "access_level": "rw", "format": "json"}]: dispatch
Jan 23 19:06:43 compute-0 ceph-mon[75097]: pgmap v1101: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 93 KiB/s wr, 53 op/s
Jan 23 19:06:43 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 23 19:06:43 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6"]} : dispatch
Jan 23 19:06:43 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6"]}]': finished
Jan 23 19:06:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 164 KiB/s wr, 93 op/s
Jan 23 19:06:44 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fba5485d-e5a8-4974-84d8-e4e56ba346a4", "auth_id": "bob", "format": "json"}]: dispatch
Jan 23 19:06:44 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fba5485d-e5a8-4974-84d8-e4e56ba346a4", "auth_id": "bob", "format": "json"}]: dispatch
Jan 23 19:06:45 compute-0 ceph-mon[75097]: pgmap v1102: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 164 KiB/s wr, 93 op/s
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "bob", "format": "json"}]: dispatch
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 23 19:06:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0)
Jan 23 19:06:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "bob", "format": "json"}]: dispatch
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363
Jan 23 19:06:46 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6/26ffb8a8-1216-4627-b59b-a809cb977363],prefix=session evict} (starting...)
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:46 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:06:46.357 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 19:06:46 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:06:46.358 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} v 0)
Jan 23 19:06:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} v 0)
Jan 23 19:06:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume deauthorize, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1979362240, client_metadata.root=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4
Jan 23 19:06:46 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session evict {filters=[auth_name=tempest-cephx-id-1979362240,client_metadata.root=/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4/4af2fca2-ded3-4f0d-9e2e-b14759090fe4],prefix=session evict} (starting...)
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1979362240, format:json, prefix:fs subvolume evict, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 110 KiB/s wr, 87 op/s
Jan 23 19:06:46 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "bob", "format": "json"}]: dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Jan 23 19:06:46 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "auth_id": "bob", "format": "json"}]: dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1979362240", "format": "json"} : dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"} : dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1979362240"}]': finished
Jan 23 19:06:46 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "auth_id": "tempest-cephx-id-1979362240", "format": "json"}]: dispatch
Jan 23 19:06:46 compute-0 ceph-mon[75097]: pgmap v1103: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 110 KiB/s wr, 87 op/s
Jan 23 19:06:46 compute-0 podman[250532]: 2026-01-23 19:06:46.964931078 +0000 UTC m=+0.048765821 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 19:06:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 110 KiB/s wr, 87 op/s
Jan 23 19:06:48 compute-0 ceph-mon[75097]: pgmap v1104: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 110 KiB/s wr, 87 op/s
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "format": "json"}]: dispatch
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:50 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:50.434+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6' of type subvolume
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6' of type subvolume
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6'' moved to trashcan
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6, vol_name:cephfs) < ""
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 143 KiB/s wr, 91 op/s
Jan 23 19:06:50 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "format": "json"}]: dispatch
Jan 23 19:06:50 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9ac5a4a7-976b-4e6b-b5a2-f7c4671155d6", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:50 compute-0 ceph-mon[75097]: pgmap v1105: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 143 KiB/s wr, 91 op/s
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "format": "json"}]: dispatch
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:50 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:06:50.705+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2d153082-1f6f-49c0-9c45-aa3f03ba31d4' of type subvolume
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2d153082-1f6f-49c0-9c45-aa3f03ba31d4' of type subvolume
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2d153082-1f6f-49c0-9c45-aa3f03ba31d4'' moved to trashcan
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:06:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2d153082-1f6f-49c0-9c45-aa3f03ba31d4, vol_name:cephfs) < ""
Jan 23 19:06:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "format": "json"}]: dispatch
Jan 23 19:06:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2d153082-1f6f-49c0-9c45-aa3f03ba31d4", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:52 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:06:52.360 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 19:06:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 103 KiB/s wr, 43 op/s
Jan 23 19:06:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "format": "json"}]: dispatch
Jan 23 19:06:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "format": "json"}]: dispatch
Jan 23 19:06:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, vol_name:cephfs) < ""
Jan 23 19:06:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, vol_name:cephfs) < ""
Jan 23 19:06:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:06:52 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:52 compute-0 ceph-mon[75097]: pgmap v1106: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 103 KiB/s wr, 43 op/s
Jan 23 19:06:52 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:06:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "format": "json"}]: dispatch
Jan 23 19:06:53 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "format": "json"}]: dispatch
Jan 23 19:06:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 65 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 124 KiB/s wr, 46 op/s
Jan 23 19:06:55 compute-0 ceph-mon[75097]: pgmap v1107: 305 pgs: 305 active+clean; 65 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 124 KiB/s wr, 46 op/s
Jan 23 19:06:55 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "format": "json"}]: dispatch
Jan 23 19:06:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:06:55 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, vol_name:cephfs) < ""
Jan 23 19:06:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2372c43f-e00a-4e92-b338-82e4882d6cb1'' moved to trashcan
Jan 23 19:06:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:06:55 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2372c43f-e00a-4e92-b338-82e4882d6cb1, vol_name:cephfs) < ""
Jan 23 19:06:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 65 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 53 KiB/s wr, 6 op/s
Jan 23 19:06:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "format": "json"}]: dispatch
Jan 23 19:06:56 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2372c43f-e00a-4e92-b338-82e4882d6cb1", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:56 compute-0 ceph-mon[75097]: pgmap v1108: 305 pgs: 305 active+clean; 65 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 53 KiB/s wr, 6 op/s
Jan 23 19:06:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:06:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1971903195' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:06:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:06:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1971903195' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:06:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1971903195' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:06:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1971903195' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:06:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:06:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 65 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 53 KiB/s wr, 6 op/s
Jan 23 19:06:58 compute-0 ceph-mon[75097]: pgmap v1109: 305 pgs: 305 active+clean; 65 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 53 KiB/s wr, 6 op/s
Jan 23 19:06:59 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "snap_name": "cdd68b8f-be76-4491-bcdc-990526be471f_91ef0345-6629-4e97-ba21-4bc60ef4b65b", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdd68b8f-be76-4491-bcdc-990526be471f_91ef0345-6629-4e97-ba21-4bc60ef4b65b, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:06:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta.tmp'
Jan 23 19:06:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta.tmp' to config b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta'
Jan 23 19:06:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdd68b8f-be76-4491-bcdc-990526be471f_91ef0345-6629-4e97-ba21-4bc60ef4b65b, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:06:59 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "snap_name": "cdd68b8f-be76-4491-bcdc-990526be471f", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdd68b8f-be76-4491-bcdc-990526be471f, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:06:59 compute-0 nova_compute[240119]: 2026-01-23 19:06:59.344 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:06:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta.tmp'
Jan 23 19:06:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta.tmp' to config b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12/.meta'
Jan 23 19:06:59 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdd68b8f-be76-4491-bcdc-990526be471f, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:06:59 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "snap_name": "cdd68b8f-be76-4491-bcdc-990526be471f_91ef0345-6629-4e97-ba21-4bc60ef4b65b", "force": true, "format": "json"}]: dispatch
Jan 23 19:06:59 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "snap_name": "cdd68b8f-be76-4491-bcdc-990526be471f", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:07:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:07:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:07:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:07:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:07:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:07:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 67 KiB/s wr, 7 op/s
Jan 23 19:07:00 compute-0 ceph-mon[75097]: pgmap v1110: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 67 KiB/s wr, 7 op/s
Jan 23 19:07:01 compute-0 sudo[250551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:07:01 compute-0 sudo[250551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:07:01 compute-0 sudo[250551]: pam_unix(sudo:session): session closed for user root
Jan 23 19:07:01 compute-0 nova_compute[240119]: 2026-01-23 19:07:01.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:07:01 compute-0 sudo[250576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:07:01 compute-0 sudo[250576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:07:01 compute-0 sudo[250576]: pam_unix(sudo:session): session closed for user root
Jan 23 19:07:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 19:07:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 23 19:07:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:07:01 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:07:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:07:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:07:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:07:01 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:07:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:07:01 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:07:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:07:02 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:07:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:07:02 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:07:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 23 19:07:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:07:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:07:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:07:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:07:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:07:02 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:07:02 compute-0 sudo[250632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:07:02 compute-0 sudo[250632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:07:02 compute-0 sudo[250632]: pam_unix(sudo:session): session closed for user root
Jan 23 19:07:02 compute-0 sudo[250657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:07:02 compute-0 sudo[250657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:07:02 compute-0 nova_compute[240119]: 2026-01-23 19:07:02.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:07:02 compute-0 podman[250693]: 2026-01-23 19:07:02.425624926 +0000 UTC m=+0.057944227 container create d7f91a1267156df8bbe5caf2cf2002d8ede9b867033c93aac2a44d80b6435a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_benz, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 19:07:02 compute-0 systemd[1]: Started libpod-conmon-d7f91a1267156df8bbe5caf2cf2002d8ede9b867033c93aac2a44d80b6435a04.scope.
Jan 23 19:07:02 compute-0 podman[250693]: 2026-01-23 19:07:02.392828085 +0000 UTC m=+0.025147416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:07:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:07:02 compute-0 podman[250693]: 2026-01-23 19:07:02.52406848 +0000 UTC m=+0.156387811 container init d7f91a1267156df8bbe5caf2cf2002d8ede9b867033c93aac2a44d80b6435a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 23 19:07:02 compute-0 podman[250693]: 2026-01-23 19:07:02.531159093 +0000 UTC m=+0.163478394 container start d7f91a1267156df8bbe5caf2cf2002d8ede9b867033c93aac2a44d80b6435a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:07:02 compute-0 stoic_benz[250709]: 167 167
Jan 23 19:07:02 compute-0 systemd[1]: libpod-d7f91a1267156df8bbe5caf2cf2002d8ede9b867033c93aac2a44d80b6435a04.scope: Deactivated successfully.
Jan 23 19:07:02 compute-0 podman[250693]: 2026-01-23 19:07:02.538941114 +0000 UTC m=+0.171260435 container attach d7f91a1267156df8bbe5caf2cf2002d8ede9b867033c93aac2a44d80b6435a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_benz, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:07:02 compute-0 podman[250693]: 2026-01-23 19:07:02.54005019 +0000 UTC m=+0.172369491 container died d7f91a1267156df8bbe5caf2cf2002d8ede9b867033c93aac2a44d80b6435a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-548bacca7da567e7e860d551418c041ffd71a908b4cab90bf32dd319d88a8bbd-merged.mount: Deactivated successfully.
Jan 23 19:07:02 compute-0 podman[250693]: 2026-01-23 19:07:02.623165211 +0000 UTC m=+0.255484512 container remove d7f91a1267156df8bbe5caf2cf2002d8ede9b867033c93aac2a44d80b6435a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_benz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:07:02 compute-0 systemd[1]: libpod-conmon-d7f91a1267156df8bbe5caf2cf2002d8ede9b867033c93aac2a44d80b6435a04.scope: Deactivated successfully.
Jan 23 19:07:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 35 KiB/s wr, 4 op/s
Jan 23 19:07:02 compute-0 podman[250735]: 2026-01-23 19:07:02.783797183 +0000 UTC m=+0.044447546 container create 0e12756363bb8d829982bd455517b2de889b9b430b867195ed543a1dce6b619b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 19:07:02 compute-0 systemd[1]: Started libpod-conmon-0e12756363bb8d829982bd455517b2de889b9b430b867195ed543a1dce6b619b.scope.
Jan 23 19:07:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7763c7b80ff221a808fdcebd038f14865592b1bdd7ba16c97e0fe49bf02199/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7763c7b80ff221a808fdcebd038f14865592b1bdd7ba16c97e0fe49bf02199/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7763c7b80ff221a808fdcebd038f14865592b1bdd7ba16c97e0fe49bf02199/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7763c7b80ff221a808fdcebd038f14865592b1bdd7ba16c97e0fe49bf02199/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7763c7b80ff221a808fdcebd038f14865592b1bdd7ba16c97e0fe49bf02199/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:02 compute-0 podman[250735]: 2026-01-23 19:07:02.762171755 +0000 UTC m=+0.022822168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:07:02 compute-0 podman[250735]: 2026-01-23 19:07:02.867662121 +0000 UTC m=+0.128312514 container init 0e12756363bb8d829982bd455517b2de889b9b430b867195ed543a1dce6b619b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:07:02 compute-0 podman[250735]: 2026-01-23 19:07:02.876491928 +0000 UTC m=+0.137142301 container start 0e12756363bb8d829982bd455517b2de889b9b430b867195ed543a1dce6b619b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 19:07:02 compute-0 podman[250735]: 2026-01-23 19:07:02.881815247 +0000 UTC m=+0.142465630 container attach 0e12756363bb8d829982bd455517b2de889b9b430b867195ed543a1dce6b619b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 19:07:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "format": "json"}]: dispatch
Jan 23 19:07:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:07:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:07:02 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a974fb39-4b39-4d59-84a7-f8f3ffa56b12' of type subvolume
Jan 23 19:07:02 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:07:02.884+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a974fb39-4b39-4d59-84a7-f8f3ffa56b12' of type subvolume
Jan 23 19:07:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:07:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a974fb39-4b39-4d59-84a7-f8f3ffa56b12'' moved to trashcan
Jan 23 19:07:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:07:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a974fb39-4b39-4d59-84a7-f8f3ffa56b12, vol_name:cephfs) < ""
Jan 23 19:07:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.047968) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195223048009, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1265, "num_deletes": 256, "total_data_size": 1443599, "memory_usage": 1472704, "flush_reason": "Manual Compaction"}
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195223061137, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1426923, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23730, "largest_seqno": 24994, "table_properties": {"data_size": 1420890, "index_size": 3108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14823, "raw_average_key_size": 20, "raw_value_size": 1407850, "raw_average_value_size": 1933, "num_data_blocks": 138, "num_entries": 728, "num_filter_entries": 728, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769195156, "oldest_key_time": 1769195156, "file_creation_time": 1769195223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 13208 microseconds, and 3941 cpu microseconds.
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.061176) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1426923 bytes OK
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.061195) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.062627) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.062644) EVENT_LOG_v1 {"time_micros": 1769195223062639, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.062662) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1437284, prev total WAL file size 1437284, number of live WAL files 2.
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.063403) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1393KB)], [53(8747KB)]
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195223063455, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10384008, "oldest_snapshot_seqno": -1}
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5508 keys, 10284707 bytes, temperature: kUnknown
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195223142213, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10284707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10243898, "index_size": 25915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 136993, "raw_average_key_size": 24, "raw_value_size": 10141142, "raw_average_value_size": 1841, "num_data_blocks": 1083, "num_entries": 5508, "num_filter_entries": 5508, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769195223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.142411) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10284707 bytes
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.146678) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.7 rd, 130.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(14.5) write-amplify(7.2) OK, records in: 6043, records dropped: 535 output_compression: NoCompression
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.146723) EVENT_LOG_v1 {"time_micros": 1769195223146709, "job": 28, "event": "compaction_finished", "compaction_time_micros": 78822, "compaction_time_cpu_micros": 25650, "output_level": 6, "num_output_files": 1, "total_output_size": 10284707, "num_input_records": 6043, "num_output_records": 5508, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195223147118, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195223148906, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.063295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.148973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.148978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.148980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.148981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:07:03 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:07:03.148983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:07:03 compute-0 nova_compute[240119]: 2026-01-23 19:07:03.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:07:03 compute-0 nova_compute[240119]: 2026-01-23 19:07:03.350 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:07:03 compute-0 magical_dhawan[250751]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:07:03 compute-0 magical_dhawan[250751]: --> All data devices are unavailable
Jan 23 19:07:03 compute-0 nova_compute[240119]: 2026-01-23 19:07:03.380 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:07:03 compute-0 nova_compute[240119]: 2026-01-23 19:07:03.381 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:07:03 compute-0 nova_compute[240119]: 2026-01-23 19:07:03.381 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:07:03 compute-0 nova_compute[240119]: 2026-01-23 19:07:03.381 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:07:03 compute-0 nova_compute[240119]: 2026-01-23 19:07:03.382 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:07:03 compute-0 systemd[1]: libpod-0e12756363bb8d829982bd455517b2de889b9b430b867195ed543a1dce6b619b.scope: Deactivated successfully.
Jan 23 19:07:03 compute-0 podman[250735]: 2026-01-23 19:07:03.386060843 +0000 UTC m=+0.646711226 container died 0e12756363bb8d829982bd455517b2de889b9b430b867195ed543a1dce6b619b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 23 19:07:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a7763c7b80ff221a808fdcebd038f14865592b1bdd7ba16c97e0fe49bf02199-merged.mount: Deactivated successfully.
Jan 23 19:07:03 compute-0 podman[250735]: 2026-01-23 19:07:03.440497282 +0000 UTC m=+0.701147645 container remove 0e12756363bb8d829982bd455517b2de889b9b430b867195ed543a1dce6b619b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:07:03 compute-0 systemd[1]: libpod-conmon-0e12756363bb8d829982bd455517b2de889b9b430b867195ed543a1dce6b619b.scope: Deactivated successfully.
Jan 23 19:07:03 compute-0 sudo[250657]: pam_unix(sudo:session): session closed for user root
Jan 23 19:07:03 compute-0 sudo[250784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:07:03 compute-0 sudo[250784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:07:03 compute-0 sudo[250784]: pam_unix(sudo:session): session closed for user root
Jan 23 19:07:03 compute-0 sudo[250827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:07:03 compute-0 sudo[250827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:07:03 compute-0 podman[250865]: 2026-01-23 19:07:03.889853517 +0000 UTC m=+0.040893410 container create 7d589c2404bce05de62e691e2556bc547ddec82a28d511f74c13cd9a253962b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:07:03 compute-0 systemd[1]: Started libpod-conmon-7d589c2404bce05de62e691e2556bc547ddec82a28d511f74c13cd9a253962b4.scope.
Jan 23 19:07:03 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:07:03 compute-0 podman[250865]: 2026-01-23 19:07:03.870293199 +0000 UTC m=+0.021333112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:07:03 compute-0 podman[250865]: 2026-01-23 19:07:03.978788559 +0000 UTC m=+0.129828472 container init 7d589c2404bce05de62e691e2556bc547ddec82a28d511f74c13cd9a253962b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:07:03 compute-0 podman[250865]: 2026-01-23 19:07:03.985321309 +0000 UTC m=+0.136361202 container start 7d589c2404bce05de62e691e2556bc547ddec82a28d511f74c13cd9a253962b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_roentgen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 19:07:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:07:03 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2533734337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:07:03 compute-0 nice_roentgen[250880]: 167 167
Jan 23 19:07:03 compute-0 systemd[1]: libpod-7d589c2404bce05de62e691e2556bc547ddec82a28d511f74c13cd9a253962b4.scope: Deactivated successfully.
Jan 23 19:07:03 compute-0 podman[250865]: 2026-01-23 19:07:03.991672634 +0000 UTC m=+0.142712547 container attach 7d589c2404bce05de62e691e2556bc547ddec82a28d511f74c13cd9a253962b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_roentgen, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:07:03 compute-0 podman[250865]: 2026-01-23 19:07:03.993118839 +0000 UTC m=+0.144158762 container died 7d589c2404bce05de62e691e2556bc547ddec82a28d511f74c13cd9a253962b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 19:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0093dfcbcf4b37c098277b56d5ed7735aeca18a1c1cbf06be34d999274626077-merged.mount: Deactivated successfully.
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.021 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:07:04 compute-0 podman[250865]: 2026-01-23 19:07:04.047848045 +0000 UTC m=+0.198887938 container remove 7d589c2404bce05de62e691e2556bc547ddec82a28d511f74c13cd9a253962b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_roentgen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 19:07:04 compute-0 ceph-mon[75097]: pgmap v1111: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 35 KiB/s wr, 4 op/s
Jan 23 19:07:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "format": "json"}]: dispatch
Jan 23 19:07:04 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a974fb39-4b39-4d59-84a7-f8f3ffa56b12", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:04 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2533734337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:07:04 compute-0 systemd[1]: libpod-conmon-7d589c2404bce05de62e691e2556bc547ddec82a28d511f74c13cd9a253962b4.scope: Deactivated successfully.
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.204 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.205 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4994MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.206 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.206 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:07:04 compute-0 podman[250905]: 2026-01-23 19:07:04.230910007 +0000 UTC m=+0.046713132 container create 9dc89bdea88befbe3137807b297466b07ec33b7d24cf3f61f8c3b76e01922aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_joliot, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 19:07:04 compute-0 systemd[1]: Started libpod-conmon-9dc89bdea88befbe3137807b297466b07ec33b7d24cf3f61f8c3b76e01922aa7.scope.
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.273 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.273 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:07:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.296 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a33e5558a63a67b0eb339998bfde36d0a1096bf3782a13c6722011a5ff60613/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a33e5558a63a67b0eb339998bfde36d0a1096bf3782a13c6722011a5ff60613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a33e5558a63a67b0eb339998bfde36d0a1096bf3782a13c6722011a5ff60613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a33e5558a63a67b0eb339998bfde36d0a1096bf3782a13c6722011a5ff60613/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:04 compute-0 podman[250905]: 2026-01-23 19:07:04.213688357 +0000 UTC m=+0.029491502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:07:04 compute-0 podman[250905]: 2026-01-23 19:07:04.311345701 +0000 UTC m=+0.127148846 container init 9dc89bdea88befbe3137807b297466b07ec33b7d24cf3f61f8c3b76e01922aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_joliot, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:07:04 compute-0 podman[250905]: 2026-01-23 19:07:04.319667624 +0000 UTC m=+0.135470749 container start 9dc89bdea88befbe3137807b297466b07ec33b7d24cf3f61f8c3b76e01922aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 19:07:04 compute-0 podman[250905]: 2026-01-23 19:07:04.32601808 +0000 UTC m=+0.141821205 container attach 9dc89bdea88befbe3137807b297466b07ec33b7d24cf3f61f8c3b76e01922aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_joliot, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:07:04 compute-0 keen_joliot[250921]: {
Jan 23 19:07:04 compute-0 keen_joliot[250921]:     "0": [
Jan 23 19:07:04 compute-0 keen_joliot[250921]:         {
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "devices": [
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "/dev/loop3"
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             ],
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_name": "ceph_lv0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_size": "21470642176",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "name": "ceph_lv0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "tags": {
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.cluster_name": "ceph",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.crush_device_class": "",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.encrypted": "0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.objectstore": "bluestore",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.osd_id": "0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.type": "block",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.vdo": "0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.with_tpm": "0"
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             },
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "type": "block",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "vg_name": "ceph_vg0"
Jan 23 19:07:04 compute-0 keen_joliot[250921]:         }
Jan 23 19:07:04 compute-0 keen_joliot[250921]:     ],
Jan 23 19:07:04 compute-0 keen_joliot[250921]:     "1": [
Jan 23 19:07:04 compute-0 keen_joliot[250921]:         {
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "devices": [
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "/dev/loop4"
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             ],
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_name": "ceph_lv1",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_size": "21470642176",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "name": "ceph_lv1",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "tags": {
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.cluster_name": "ceph",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.crush_device_class": "",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.encrypted": "0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.objectstore": "bluestore",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.osd_id": "1",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.type": "block",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.vdo": "0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.with_tpm": "0"
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             },
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "type": "block",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "vg_name": "ceph_vg1"
Jan 23 19:07:04 compute-0 keen_joliot[250921]:         }
Jan 23 19:07:04 compute-0 keen_joliot[250921]:     ],
Jan 23 19:07:04 compute-0 keen_joliot[250921]:     "2": [
Jan 23 19:07:04 compute-0 keen_joliot[250921]:         {
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "devices": [
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "/dev/loop5"
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             ],
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_name": "ceph_lv2",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_size": "21470642176",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "name": "ceph_lv2",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "tags": {
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.cluster_name": "ceph",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.crush_device_class": "",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.encrypted": "0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.objectstore": "bluestore",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.osd_id": "2",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.type": "block",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.vdo": "0",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:                 "ceph.with_tpm": "0"
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             },
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "type": "block",
Jan 23 19:07:04 compute-0 keen_joliot[250921]:             "vg_name": "ceph_vg2"
Jan 23 19:07:04 compute-0 keen_joliot[250921]:         }
Jan 23 19:07:04 compute-0 keen_joliot[250921]:     ]
Jan 23 19:07:04 compute-0 keen_joliot[250921]: }
Jan 23 19:07:04 compute-0 systemd[1]: libpod-9dc89bdea88befbe3137807b297466b07ec33b7d24cf3f61f8c3b76e01922aa7.scope: Deactivated successfully.
Jan 23 19:07:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 52 KiB/s wr, 6 op/s
Jan 23 19:07:04 compute-0 podman[250950]: 2026-01-23 19:07:04.658985982 +0000 UTC m=+0.023133276 container died 9dc89bdea88befbe3137807b297466b07ec33b7d24cf3f61f8c3b76e01922aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_joliot, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 19:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a33e5558a63a67b0eb339998bfde36d0a1096bf3782a13c6722011a5ff60613-merged.mount: Deactivated successfully.
Jan 23 19:07:04 compute-0 podman[250950]: 2026-01-23 19:07:04.758096223 +0000 UTC m=+0.122243487 container remove 9dc89bdea88befbe3137807b297466b07ec33b7d24cf3f61f8c3b76e01922aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_joliot, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:07:04 compute-0 systemd[1]: libpod-conmon-9dc89bdea88befbe3137807b297466b07ec33b7d24cf3f61f8c3b76e01922aa7.scope: Deactivated successfully.
Jan 23 19:07:04 compute-0 sudo[250827]: pam_unix(sudo:session): session closed for user root
Jan 23 19:07:04 compute-0 sudo[250965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:07:04 compute-0 sudo[250965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:07:04 compute-0 sudo[250965]: pam_unix(sudo:session): session closed for user root
Jan 23 19:07:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:07:04 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3676274184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.902 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.908 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:07:04 compute-0 sudo[250991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:07:04 compute-0 sudo[250991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.926 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.928 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:07:04 compute-0 nova_compute[240119]: 2026-01-23 19:07:04.928 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:07:05 compute-0 podman[251030]: 2026-01-23 19:07:05.212370928 +0000 UTC m=+0.057489535 container create 1c2240ea1fa582603a0960e67003f2535afcfcd3b862a3425e5a70a457107d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 19:07:05 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3676274184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:07:05 compute-0 systemd[1]: Started libpod-conmon-1c2240ea1fa582603a0960e67003f2535afcfcd3b862a3425e5a70a457107d11.scope.
Jan 23 19:07:05 compute-0 podman[251030]: 2026-01-23 19:07:05.17724079 +0000 UTC m=+0.022359407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:07:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:07:05 compute-0 podman[251030]: 2026-01-23 19:07:05.303986975 +0000 UTC m=+0.149105602 container init 1c2240ea1fa582603a0960e67003f2535afcfcd3b862a3425e5a70a457107d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:07:05 compute-0 podman[251030]: 2026-01-23 19:07:05.311458918 +0000 UTC m=+0.156577545 container start 1c2240ea1fa582603a0960e67003f2535afcfcd3b862a3425e5a70a457107d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 19:07:05 compute-0 zen_rosalind[251046]: 167 167
Jan 23 19:07:05 compute-0 podman[251030]: 2026-01-23 19:07:05.316914591 +0000 UTC m=+0.162033208 container attach 1c2240ea1fa582603a0960e67003f2535afcfcd3b862a3425e5a70a457107d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 19:07:05 compute-0 systemd[1]: libpod-1c2240ea1fa582603a0960e67003f2535afcfcd3b862a3425e5a70a457107d11.scope: Deactivated successfully.
Jan 23 19:07:05 compute-0 podman[251030]: 2026-01-23 19:07:05.318443429 +0000 UTC m=+0.163562036 container died 1c2240ea1fa582603a0960e67003f2535afcfcd3b862a3425e5a70a457107d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0068fe5110ed914406515742a0e3f9cab3457d6a48e3cae858fc7ef49d94354-merged.mount: Deactivated successfully.
Jan 23 19:07:05 compute-0 podman[251030]: 2026-01-23 19:07:05.379893669 +0000 UTC m=+0.225012266 container remove 1c2240ea1fa582603a0960e67003f2535afcfcd3b862a3425e5a70a457107d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:07:05 compute-0 systemd[1]: libpod-conmon-1c2240ea1fa582603a0960e67003f2535afcfcd3b862a3425e5a70a457107d11.scope: Deactivated successfully.
Jan 23 19:07:05 compute-0 podman[251070]: 2026-01-23 19:07:05.549702446 +0000 UTC m=+0.050580146 container create 27a7b303d245f32662459f3dd3e823e68fee327d14068b017790ba68171a829c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_taussig, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 23 19:07:05 compute-0 systemd[1]: Started libpod-conmon-27a7b303d245f32662459f3dd3e823e68fee327d14068b017790ba68171a829c.scope.
Jan 23 19:07:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:07:05 compute-0 podman[251070]: 2026-01-23 19:07:05.521518448 +0000 UTC m=+0.022396168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8b0d37adc37d5cb07b8ffdc35736e51f55ae3cdd126f26fbf535438ca7f85f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8b0d37adc37d5cb07b8ffdc35736e51f55ae3cdd126f26fbf535438ca7f85f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8b0d37adc37d5cb07b8ffdc35736e51f55ae3cdd126f26fbf535438ca7f85f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8b0d37adc37d5cb07b8ffdc35736e51f55ae3cdd126f26fbf535438ca7f85f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:07:05 compute-0 podman[251070]: 2026-01-23 19:07:05.633434032 +0000 UTC m=+0.134311762 container init 27a7b303d245f32662459f3dd3e823e68fee327d14068b017790ba68171a829c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_taussig, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:07:05 compute-0 podman[251070]: 2026-01-23 19:07:05.64030069 +0000 UTC m=+0.141178390 container start 27a7b303d245f32662459f3dd3e823e68fee327d14068b017790ba68171a829c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 19:07:05 compute-0 podman[251070]: 2026-01-23 19:07:05.646516411 +0000 UTC m=+0.147394111 container attach 27a7b303d245f32662459f3dd3e823e68fee327d14068b017790ba68171a829c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 19:07:05 compute-0 nova_compute[240119]: 2026-01-23 19:07:05.926 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:07:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 23 19:07:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 23 19:07:06 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 23 19:07:06 compute-0 ceph-mon[75097]: pgmap v1112: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 52 KiB/s wr, 6 op/s
Jan 23 19:07:06 compute-0 lvm[251166]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:07:06 compute-0 lvm[251164]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:07:06 compute-0 lvm[251166]: VG ceph_vg1 finished
Jan 23 19:07:06 compute-0 lvm[251164]: VG ceph_vg0 finished
Jan 23 19:07:06 compute-0 lvm[251168]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:07:06 compute-0 lvm[251168]: VG ceph_vg2 finished
Jan 23 19:07:06 compute-0 festive_taussig[251086]: {}
Jan 23 19:07:06 compute-0 systemd[1]: libpod-27a7b303d245f32662459f3dd3e823e68fee327d14068b017790ba68171a829c.scope: Deactivated successfully.
Jan 23 19:07:06 compute-0 podman[251070]: 2026-01-23 19:07:06.508095663 +0000 UTC m=+1.008973363 container died 27a7b303d245f32662459f3dd3e823e68fee327d14068b017790ba68171a829c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_taussig, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:07:06 compute-0 systemd[1]: libpod-27a7b303d245f32662459f3dd3e823e68fee327d14068b017790ba68171a829c.scope: Consumed 1.371s CPU time.
Jan 23 19:07:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 4 op/s
Jan 23 19:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe8b0d37adc37d5cb07b8ffdc35736e51f55ae3cdd126f26fbf535438ca7f85f-merged.mount: Deactivated successfully.
Jan 23 19:07:07 compute-0 podman[251070]: 2026-01-23 19:07:07.196308062 +0000 UTC m=+1.697185762 container remove 27a7b303d245f32662459f3dd3e823e68fee327d14068b017790ba68171a829c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_taussig, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:07:07 compute-0 systemd[1]: libpod-conmon-27a7b303d245f32662459f3dd3e823e68fee327d14068b017790ba68171a829c.scope: Deactivated successfully.
Jan 23 19:07:07 compute-0 sudo[250991]: pam_unix(sudo:session): session closed for user root
Jan 23 19:07:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:07:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:07:07 compute-0 nova_compute[240119]: 2026-01-23 19:07:07.350 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:07:07 compute-0 nova_compute[240119]: 2026-01-23 19:07:07.353 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:07:07 compute-0 nova_compute[240119]: 2026-01-23 19:07:07.353 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:07:07 compute-0 nova_compute[240119]: 2026-01-23 19:07:07.368 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:07:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:07:07 compute-0 ceph-mon[75097]: osdmap e145: 3 total, 3 up, 3 in
Jan 23 19:07:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:07:07 compute-0 sudo[251185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:07:07 compute-0 sudo[251185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:07:07 compute-0 sudo[251185]: pam_unix(sudo:session): session closed for user root
Jan 23 19:07:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:08 compute-0 nova_compute[240119]: 2026-01-23 19:07:08.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:07:08 compute-0 nova_compute[240119]: 2026-01-23 19:07:08.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:07:08 compute-0 ceph-mon[75097]: pgmap v1114: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 4 op/s
Jan 23 19:07:08 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:07:08 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:07:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 4 op/s
Jan 23 19:07:10 compute-0 podman[251210]: 2026-01-23 19:07:10.050472181 +0000 UTC m=+0.130344685 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 19:07:10 compute-0 nova_compute[240119]: 2026-01-23 19:07:10.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:07:10 compute-0 ceph-mon[75097]: pgmap v1115: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 4 op/s
Jan 23 19:07:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 30 KiB/s wr, 4 op/s
Jan 23 19:07:12 compute-0 ceph-mon[75097]: pgmap v1116: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 30 KiB/s wr, 4 op/s
Jan 23 19:07:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 30 KiB/s wr, 4 op/s
Jan 23 19:07:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 23 19:07:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 23 19:07:13 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 23 19:07:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:07:13.587 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:07:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:07:13.587 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:07:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:07:13.587 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:07:14 compute-0 ceph-mon[75097]: pgmap v1117: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 30 KiB/s wr, 4 op/s
Jan 23 19:07:14 compute-0 ceph-mon[75097]: osdmap e146: 3 total, 3 up, 3 in
Jan 23 19:07:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 243 B/s rd, 19 KiB/s wr, 2 op/s
Jan 23 19:07:16 compute-0 ceph-mon[75097]: pgmap v1119: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 243 B/s rd, 19 KiB/s wr, 2 op/s
Jan 23 19:07:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:07:16 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/f9d6d7b8-b95f-4222-aee0-6813a4f2131f'.
Jan 23 19:07:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta.tmp'
Jan 23 19:07:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta.tmp' to config b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta'
Jan 23 19:07:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:07:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "format": "json"}]: dispatch
Jan 23 19:07:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:07:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:07:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:07:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 16 KiB/s wr, 2 op/s
Jan 23 19:07:17 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:17 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "format": "json"}]: dispatch
Jan 23 19:07:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:17 compute-0 podman[251237]: 2026-01-23 19:07:17.968622112 +0000 UTC m=+0.050890595 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 19:07:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:18 compute-0 ceph-mon[75097]: pgmap v1120: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 16 KiB/s wr, 2 op/s
Jan 23 19:07:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 16 KiB/s wr, 2 op/s
Jan 23 19:07:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "snap_name": "f7119362-56c6-41a3-a825-295c8385d456", "format": "json"}]: dispatch
Jan 23 19:07:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f7119362-56c6-41a3-a825-295c8385d456, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:07:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f7119362-56c6-41a3-a825-295c8385d456, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:07:20 compute-0 ceph-mon[75097]: pgmap v1121: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 16 KiB/s wr, 2 op/s
Jan 23 19:07:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Jan 23 19:07:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "snap_name": "f7119362-56c6-41a3-a825-295c8385d456", "format": "json"}]: dispatch
Jan 23 19:07:22 compute-0 ceph-mon[75097]: pgmap v1122: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Jan 23 19:07:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Jan 23 19:07:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "snap_name": "f7119362-56c6-41a3-a825-295c8385d456", "target_sub_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "format": "json"}]: dispatch
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:f7119362-56c6-41a3-a825-295c8385d456, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, target_sub_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, vol_name:cephfs) < ""
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/cf543da1-dccc-482e-bb6f-381638973d21'.
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta.tmp'
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta.tmp' to config b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta'
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 7d927566-3947-46ff-bf09-d8bebab569f5 for path b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12'
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta.tmp'
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta.tmp' to config b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta'
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:f7119362-56c6-41a3-a825-295c8385d456, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, target_sub_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, vol_name:cephfs) < ""
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, c5afb25c-aa10-424c-a2b9-1de8cd95ec12)
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "format": "json"}]: dispatch
Jan 23 19:07:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:07:24 compute-0 ceph-mon[75097]: pgmap v1123: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Jan 23 19:07:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s wr, 2 op/s
Jan 23 19:07:25 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "snap_name": "f7119362-56c6-41a3-a825-295c8385d456", "target_sub_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "format": "json"}]: dispatch
Jan 23 19:07:25 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "format": "json"}]: dispatch
Jan 23 19:07:26 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, c5afb25c-aa10-424c-a2b9-1de8cd95ec12) -- by 0 seconds
Jan 23 19:07:26 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta.tmp'
Jan 23 19:07:26 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta.tmp' to config b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta'
Jan 23 19:07:26 compute-0 ceph-mon[75097]: pgmap v1124: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s wr, 2 op/s
Jan 23 19:07:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 2 op/s
Jan 23 19:07:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:28 compute-0 ceph-mon[75097]: pgmap v1125: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 2 op/s
Jan 23 19:07:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 2 op/s
Jan 23 19:07:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:07:28
Jan 23 19:07:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:07:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:07:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes', 'images', 'vms', '.rgw.root', '.mgr']
Jan 23 19:07:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:07:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:07:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:07:30 compute-0 ceph-mon[75097]: pgmap v1126: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 2 op/s
Jan 23 19:07:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:07:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:07:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:07:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:07:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 66 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 37 KiB/s wr, 4 op/s
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.snap/f7119362-56c6-41a3-a825-295c8385d456/f9d6d7b8-b95f-4222-aee0-6813a4f2131f' to b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/cf543da1-dccc-482e-bb6f-381638973d21'
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta.tmp'
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta.tmp' to config b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta'
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.clone_index] untracking 7d927566-3947-46ff-bf09-d8bebab569f5
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta.tmp'
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta.tmp' to config b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta'
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta.tmp'
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta.tmp' to config b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12/.meta'
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, c5afb25c-aa10-424c-a2b9-1de8cd95ec12)
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53/b730fee6-f526-440c-a62b-fbd5a1b98ada'.
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53/.meta.tmp'
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53/.meta.tmp' to config b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53/.meta'
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "format": "json"}]: dispatch
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:07:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:31 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Jan 23 19:07:32 compute-0 ceph-mgr[75390]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 23 19:07:32 compute-0 ceph-mgr[75390]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Jan 23 19:07:32 compute-0 ceph-mgr[75390]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Jan 23 19:07:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Jan 23 19:07:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f06b9d6fb50>
Jan 23 19:07:32 compute-0 ceph-mon[75097]: pgmap v1127: 305 pgs: 305 active+clean; 66 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 37 KiB/s wr, 4 op/s
Jan 23 19:07:32 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:32 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "format": "json"}]: dispatch
Jan 23 19:07:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 66 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 30 KiB/s wr, 3 op/s
Jan 23 19:07:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "snap_name": "35716c41-5b72-46b0-b1fd-367179c86a72", "format": "json"}]: dispatch
Jan 23 19:07:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:35716c41-5b72-46b0-b1fd-367179c86a72, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:35716c41-5b72-46b0-b1fd-367179c86a72, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "snap_name": "35716c41-5b72-46b0-b1fd-367179c86a72_ea5fca0e-7166-47ce-8ff0-ef57fbc5ca8a", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:35716c41-5b72-46b0-b1fd-367179c86a72_ea5fca0e-7166-47ce-8ff0-ef57fbc5ca8a, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53/.meta.tmp'
Jan 23 19:07:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53/.meta.tmp' to config b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53/.meta'
Jan 23 19:07:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:35716c41-5b72-46b0-b1fd-367179c86a72_ea5fca0e-7166-47ce-8ff0-ef57fbc5ca8a, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "snap_name": "35716c41-5b72-46b0-b1fd-367179c86a72", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:35716c41-5b72-46b0-b1fd-367179c86a72, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53/.meta.tmp'
Jan 23 19:07:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53/.meta.tmp' to config b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53/.meta'
Jan 23 19:07:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:35716c41-5b72-46b0-b1fd-367179c86a72, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:34 compute-0 ceph-mon[75097]: pgmap v1128: 305 pgs: 305 active+clean; 66 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 30 KiB/s wr, 3 op/s
Jan 23 19:07:34 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "snap_name": "35716c41-5b72-46b0-b1fd-367179c86a72", "format": "json"}]: dispatch
Jan 23 19:07:34 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "snap_name": "35716c41-5b72-46b0-b1fd-367179c86a72_ea5fca0e-7166-47ce-8ff0-ef57fbc5ca8a", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:34 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "snap_name": "35716c41-5b72-46b0-b1fd-367179c86a72", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 55 KiB/s wr, 7 op/s
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: [progress INFO root] Writing back 18 completed events
Jan 23 19:07:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 19:07:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:07:36 compute-0 ceph-mon[75097]: pgmap v1129: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 55 KiB/s wr, 7 op/s
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 40 KiB/s wr, 5 op/s
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "format": "json"}]: dispatch
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:07:36 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:07:36.948+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ca22b681-9cb5-4a64-81d1-5d9133599b53' of type subvolume
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ca22b681-9cb5-4a64-81d1-5d9133599b53' of type subvolume
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ca22b681-9cb5-4a64-81d1-5d9133599b53'' moved to trashcan
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:07:36 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ca22b681-9cb5-4a64-81d1-5d9133599b53, vol_name:cephfs) < ""
Jan 23 19:07:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:07:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 40 KiB/s wr, 5 op/s
Jan 23 19:07:39 compute-0 ceph-mon[75097]: pgmap v1130: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 40 KiB/s wr, 5 op/s
Jan 23 19:07:39 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "format": "json"}]: dispatch
Jan 23 19:07:39 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ca22b681-9cb5-4a64-81d1-5d9133599b53", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658994896132289 of space, bias 1.0, pg target 0.19976984688396868 quantized to 32 (current 32)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004024368218319284 of space, bias 4.0, pg target 0.48292418619831406 quantized to 16 (current 16)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:07:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 59 KiB/s wr, 7 op/s
Jan 23 19:07:40 compute-0 ceph-mon[75097]: pgmap v1131: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 40 KiB/s wr, 5 op/s
Jan 23 19:07:40 compute-0 podman[251258]: 2026-01-23 19:07:40.998109362 +0000 UTC m=+0.087015026 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 19:07:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 23 19:07:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 23 19:07:41 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 23 19:07:41 compute-0 ceph-mon[75097]: pgmap v1132: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 59 KiB/s wr, 7 op/s
Jan 23 19:07:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:42 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c/f9891615-1e88-4a61-a7db-0b03b2a7dd06'.
Jan 23 19:07:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c/.meta.tmp'
Jan 23 19:07:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c/.meta.tmp' to config b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c/.meta'
Jan 23 19:07:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "format": "json"}]: dispatch
Jan 23 19:07:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:07:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 53 KiB/s wr, 6 op/s
Jan 23 19:07:43 compute-0 ceph-mon[75097]: osdmap e147: 3 total, 3 up, 3 in
Jan 23 19:07:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "format": "json"}]: dispatch
Jan 23 19:07:43 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:44 compute-0 ceph-mon[75097]: pgmap v1134: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 53 KiB/s wr, 6 op/s
Jan 23 19:07:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 36 KiB/s wr, 4 op/s
Jan 23 19:07:45 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "snap_name": "81df423a-af39-4393-a966-b0937dc73485", "format": "json"}]: dispatch
Jan 23 19:07:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:81df423a-af39-4393-a966-b0937dc73485, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:81df423a-af39-4393-a966-b0937dc73485, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:46 compute-0 ceph-mon[75097]: pgmap v1135: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 36 KiB/s wr, 4 op/s
Jan 23 19:07:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 36 KiB/s wr, 4 op/s
Jan 23 19:07:47 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "snap_name": "81df423a-af39-4393-a966-b0937dc73485", "format": "json"}]: dispatch
Jan 23 19:07:48 compute-0 ceph-mon[75097]: pgmap v1136: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 36 KiB/s wr, 4 op/s
Jan 23 19:07:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 23 19:07:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 23 19:07:48 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 23 19:07:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 16 KiB/s wr, 2 op/s
Jan 23 19:07:48 compute-0 podman[251286]: 2026-01-23 19:07:48.9624786 +0000 UTC m=+0.042239673 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 19:07:49 compute-0 ceph-mon[75097]: osdmap e148: 3 total, 3 up, 3 in
Jan 23 19:07:49 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:49 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203/caefab48-56b5-452f-a972-3e8fed4bff9b'.
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203/.meta.tmp'
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203/.meta.tmp' to config b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203/.meta'
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "format": "json"}]: dispatch
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:07:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:07:50 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:50 compute-0 ceph-mon[75097]: pgmap v1138: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 16 KiB/s wr, 2 op/s
Jan 23 19:07:50 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "snap_name": "81df423a-af39-4393-a966-b0937dc73485_21864cd1-392f-4e6e-bb83-b7db38eeb626", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:81df423a-af39-4393-a966-b0937dc73485_21864cd1-392f-4e6e-bb83-b7db38eeb626, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c/.meta.tmp'
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c/.meta.tmp' to config b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c/.meta'
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:81df423a-af39-4393-a966-b0937dc73485_21864cd1-392f-4e6e-bb83-b7db38eeb626, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "snap_name": "81df423a-af39-4393-a966-b0937dc73485", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:81df423a-af39-4393-a966-b0937dc73485, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c/.meta.tmp'
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c/.meta.tmp' to config b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c/.meta'
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:81df423a-af39-4393-a966-b0937dc73485, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 25 KiB/s wr, 3 op/s
Jan 23 19:07:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "format": "json"}]: dispatch
Jan 23 19:07:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "snap_name": "81df423a-af39-4393-a966-b0937dc73485_21864cd1-392f-4e6e-bb83-b7db38eeb626", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:51 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "snap_name": "81df423a-af39-4393-a966-b0937dc73485", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:52 compute-0 ceph-mon[75097]: pgmap v1139: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 25 KiB/s wr, 3 op/s
Jan 23 19:07:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 2 op/s
Jan 23 19:07:52 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "format": "json"}]: dispatch
Jan 23 19:07:52 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:07:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:54 compute-0 ceph-mon[75097]: pgmap v1140: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 2 op/s
Jan 23 19:07:54 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "format": "json"}]: dispatch
Jan 23 19:07:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 4 op/s
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "format": "json"}]: dispatch
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, vol_name:cephfs) < ""
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, vol_name:cephfs) < ""
Jan 23 19:07:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:07:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "snap_name": "c72bee12-7f03-4342-a0eb-04d39fa0a5a4", "format": "json"}]: dispatch
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c72bee12-7f03-4342-a0eb-04d39fa0a5a4, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c72bee12-7f03-4342-a0eb-04d39fa0a5a4, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "format": "json"}]: dispatch
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f0a550da-da13-43b9-818c-b94c83664b8c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f0a550da-da13-43b9-818c-b94c83664b8c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:07:56 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:07:56.302+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f0a550da-da13-43b9-818c-b94c83664b8c' of type subvolume
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f0a550da-da13-43b9-818c-b94c83664b8c' of type subvolume
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f0a550da-da13-43b9-818c-b94c83664b8c'' moved to trashcan
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f0a550da-da13-43b9-818c-b94c83664b8c, vol_name:cephfs) < ""
Jan 23 19:07:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 23 19:07:56 compute-0 ceph-mon[75097]: pgmap v1141: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 4 op/s
Jan 23 19:07:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 23 19:07:56 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 23 19:07:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 244 B/s rd, 45 KiB/s wr, 4 op/s
Jan 23 19:07:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:07:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2369271537' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:07:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:07:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2369271537' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:07:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "format": "json"}]: dispatch
Jan 23 19:07:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "snap_name": "c72bee12-7f03-4342-a0eb-04d39fa0a5a4", "format": "json"}]: dispatch
Jan 23 19:07:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "format": "json"}]: dispatch
Jan 23 19:07:57 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f0a550da-da13-43b9-818c-b94c83664b8c", "force": true, "format": "json"}]: dispatch
Jan 23 19:07:57 compute-0 ceph-mon[75097]: osdmap e149: 3 total, 3 up, 3 in
Jan 23 19:07:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2369271537' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:07:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2369271537' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:07:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:07:58 compute-0 ceph-mon[75097]: pgmap v1143: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 244 B/s rd, 45 KiB/s wr, 4 op/s
Jan 23 19:07:58 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:07:58.528 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 19:07:58 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:07:58.530 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cb820b50-42e3-42ba-9e37-9bcbf6f06c8c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cb820b50-42e3-42ba-9e37-9bcbf6f06c8c, vol_name:cephfs) < ""
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cb820b50-42e3-42ba-9e37-9bcbf6f06c8c/8edecfc0-efcd-409f-9b95-38cd77cf76c1'.
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cb820b50-42e3-42ba-9e37-9bcbf6f06c8c/.meta.tmp'
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cb820b50-42e3-42ba-9e37-9bcbf6f06c8c/.meta.tmp' to config b'/volumes/_nogroup/cb820b50-42e3-42ba-9e37-9bcbf6f06c8c/.meta'
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cb820b50-42e3-42ba-9e37-9bcbf6f06c8c, vol_name:cephfs) < ""
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cb820b50-42e3-42ba-9e37-9bcbf6f06c8c", "format": "json"}]: dispatch
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cb820b50-42e3-42ba-9e37-9bcbf6f06c8c, vol_name:cephfs) < ""
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cb820b50-42e3-42ba-9e37-9bcbf6f06c8c, vol_name:cephfs) < ""
Jan 23 19:07:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:07:58 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 4 op/s
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f56806ef-3edf-4541-b599-18f3a7b7c7c4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f56806ef-3edf-4541-b599-18f3a7b7c7c4, vol_name:cephfs) < ""
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f56806ef-3edf-4541-b599-18f3a7b7c7c4/2431a9b3-a5b4-4eec-ad8e-ef75ef7b1c87'.
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f56806ef-3edf-4541-b599-18f3a7b7c7c4/.meta.tmp'
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f56806ef-3edf-4541-b599-18f3a7b7c7c4/.meta.tmp' to config b'/volumes/_nogroup/f56806ef-3edf-4541-b599-18f3a7b7c7c4/.meta'
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f56806ef-3edf-4541-b599-18f3a7b7c7c4, vol_name:cephfs) < ""
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f56806ef-3edf-4541-b599-18f3a7b7c7c4", "format": "json"}]: dispatch
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f56806ef-3edf-4541-b599-18f3a7b7c7c4, vol_name:cephfs) < ""
Jan 23 19:07:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f56806ef-3edf-4541-b599-18f3a7b7c7c4, vol_name:cephfs) < ""
Jan 23 19:07:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:07:58 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:59 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cb820b50-42e3-42ba-9e37-9bcbf6f06c8c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:07:59 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cb820b50-42e3-42ba-9e37-9bcbf6f06c8c", "format": "json"}]: dispatch
Jan 23 19:07:59 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:07:59 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:08:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:08:00 compute-0 ceph-mon[75097]: pgmap v1144: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 4 op/s
Jan 23 19:08:00 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f56806ef-3edf-4541-b599-18f3a7b7c7c4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:00 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f56806ef-3edf-4541-b599-18f3a7b7c7c4", "format": "json"}]: dispatch
Jan 23 19:08:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:08:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:08:00 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:08:00.532 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 19:08:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:08:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:08:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 58 KiB/s wr, 6 op/s
Jan 23 19:08:01 compute-0 nova_compute[240119]: 2026-01-23 19:08:01.343 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:02 compute-0 nova_compute[240119]: 2026-01-23 19:08:02.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "f56806ef-3edf-4541-b599-18f3a7b7c7c4", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:f56806ef-3edf-4541-b599-18f3a7b7c7c4, vol_name:cephfs) < ""
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:f56806ef-3edf-4541-b599-18f3a7b7c7c4, vol_name:cephfs) < ""
Jan 23 19:08:02 compute-0 ceph-mon[75097]: pgmap v1145: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 58 KiB/s wr, 6 op/s
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cb820b50-42e3-42ba-9e37-9bcbf6f06c8c", "format": "json"}]: dispatch
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cb820b50-42e3-42ba-9e37-9bcbf6f06c8c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cb820b50-42e3-42ba-9e37-9bcbf6f06c8c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:02 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:08:02.502+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cb820b50-42e3-42ba-9e37-9bcbf6f06c8c' of type subvolume
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cb820b50-42e3-42ba-9e37-9bcbf6f06c8c' of type subvolume
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cb820b50-42e3-42ba-9e37-9bcbf6f06c8c", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cb820b50-42e3-42ba-9e37-9bcbf6f06c8c, vol_name:cephfs) < ""
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cb820b50-42e3-42ba-9e37-9bcbf6f06c8c'' moved to trashcan
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cb820b50-42e3-42ba-9e37-9bcbf6f06c8c, vol_name:cephfs) < ""
Jan 23 19:08:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 58 KiB/s wr, 6 op/s
Jan 23 19:08:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 23 19:08:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 23 19:08:03 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 23 19:08:03 compute-0 nova_compute[240119]: 2026-01-23 19:08:03.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:03 compute-0 nova_compute[240119]: 2026-01-23 19:08:03.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:03 compute-0 nova_compute[240119]: 2026-01-23 19:08:03.379 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:08:03 compute-0 nova_compute[240119]: 2026-01-23 19:08:03.380 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:08:03 compute-0 nova_compute[240119]: 2026-01-23 19:08:03.380 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:08:03 compute-0 nova_compute[240119]: 2026-01-23 19:08:03.380 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:08:03 compute-0 nova_compute[240119]: 2026-01-23 19:08:03.380 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:08:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "f56806ef-3edf-4541-b599-18f3a7b7c7c4", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 23 19:08:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cb820b50-42e3-42ba-9e37-9bcbf6f06c8c", "format": "json"}]: dispatch
Jan 23 19:08:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cb820b50-42e3-42ba-9e37-9bcbf6f06c8c", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:03 compute-0 ceph-mon[75097]: osdmap e150: 3 total, 3 up, 3 in
Jan 23 19:08:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:08:03 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646843136' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:08:03 compute-0 nova_compute[240119]: 2026-01-23 19:08:03.919 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.089 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.091 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5061MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.091 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.091 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.172 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.173 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.200 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:08:04 compute-0 ceph-mon[75097]: pgmap v1146: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 58 KiB/s wr, 6 op/s
Jan 23 19:08:04 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/646843136' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:08:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 369 B/s rd, 77 KiB/s wr, 6 op/s
Jan 23 19:08:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:08:04 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/503064369' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.739 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.745 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.761 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.763 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:08:04 compute-0 nova_compute[240119]: 2026-01-23 19:08:04.764 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:05 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/503064369' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/3c0741de-9039-4fe3-b2d7-809655c235a1'.
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "format": "json"}]: dispatch
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:08:05 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:05 compute-0 nova_compute[240119]: 2026-01-23 19:08:05.763 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:05 compute-0 nova_compute[240119]: 2026-01-23 19:08:05.764 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f56806ef-3edf-4541-b599-18f3a7b7c7c4", "format": "json"}]: dispatch
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f56806ef-3edf-4541-b599-18f3a7b7c7c4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f56806ef-3edf-4541-b599-18f3a7b7c7c4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:05 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:08:05.991+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f56806ef-3edf-4541-b599-18f3a7b7c7c4' of type subvolume
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f56806ef-3edf-4541-b599-18f3a7b7c7c4' of type subvolume
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f56806ef-3edf-4541-b599-18f3a7b7c7c4", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:05 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f56806ef-3edf-4541-b599-18f3a7b7c7c4, vol_name:cephfs) < ""
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f56806ef-3edf-4541-b599-18f3a7b7c7c4'' moved to trashcan
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f56806ef-3edf-4541-b599-18f3a7b7c7c4, vol_name:cephfs) < ""
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "897e454b-4a6d-4c3a-bde6-222bf698dc84", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:897e454b-4a6d-4c3a-bde6-222bf698dc84, vol_name:cephfs) < ""
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/897e454b-4a6d-4c3a-bde6-222bf698dc84/30e45005-9d16-408a-8fd0-e7f60a03fde4'.
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/897e454b-4a6d-4c3a-bde6-222bf698dc84/.meta.tmp'
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/897e454b-4a6d-4c3a-bde6-222bf698dc84/.meta.tmp' to config b'/volumes/_nogroup/897e454b-4a6d-4c3a-bde6-222bf698dc84/.meta'
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:897e454b-4a6d-4c3a-bde6-222bf698dc84, vol_name:cephfs) < ""
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "897e454b-4a6d-4c3a-bde6-222bf698dc84", "format": "json"}]: dispatch
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:897e454b-4a6d-4c3a-bde6-222bf698dc84, vol_name:cephfs) < ""
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:897e454b-4a6d-4c3a-bde6-222bf698dc84, vol_name:cephfs) < ""
Jan 23 19:08:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:08:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:06 compute-0 ceph-mon[75097]: pgmap v1148: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 369 B/s rd, 77 KiB/s wr, 6 op/s
Jan 23 19:08:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:06 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "format": "json"}]: dispatch
Jan 23 19:08:06 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:06 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 64 KiB/s wr, 5 op/s
Jan 23 19:08:07 compute-0 sudo[251351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:08:07 compute-0 sudo[251351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:07 compute-0 sudo[251351]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:07 compute-0 sudo[251376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 19:08:07 compute-0 sudo[251376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f56806ef-3edf-4541-b599-18f3a7b7c7c4", "format": "json"}]: dispatch
Jan 23 19:08:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f56806ef-3edf-4541-b599-18f3a7b7c7c4", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "897e454b-4a6d-4c3a-bde6-222bf698dc84", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:07 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "897e454b-4a6d-4c3a-bde6-222bf698dc84", "format": "json"}]: dispatch
Jan 23 19:08:08 compute-0 podman[251445]: 2026-01-23 19:08:08.300169203 +0000 UTC m=+0.280004539 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 19:08:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:08 compute-0 nova_compute[240119]: 2026-01-23 19:08:08.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:08 compute-0 nova_compute[240119]: 2026-01-23 19:08:08.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:08:08 compute-0 podman[251445]: 2026-01-23 19:08:08.425668278 +0000 UTC m=+0.405503614 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 23 19:08:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 64 KiB/s wr, 5 op/s
Jan 23 19:08:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "aea27463-e745-4cdf-b933-f34265646329", "format": "json"}]: dispatch
Jan 23 19:08:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aea27463-e745-4cdf-b933-f34265646329, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:08 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aea27463-e745-4cdf-b933-f34265646329, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:08 compute-0 ceph-mon[75097]: pgmap v1149: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 64 KiB/s wr, 5 op/s
Jan 23 19:08:09 compute-0 sudo[251376]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:08:09 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:08:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:08:09 compute-0 nova_compute[240119]: 2026-01-23 19:08:09.344 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:09 compute-0 nova_compute[240119]: 2026-01-23 19:08:09.363 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:09 compute-0 nova_compute[240119]: 2026-01-23 19:08:09.363 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:08:09 compute-0 nova_compute[240119]: 2026-01-23 19:08:09.363 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:08:09 compute-0 nova_compute[240119]: 2026-01-23 19:08:09.379 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:08:09 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:08:09 compute-0 sudo[251631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:08:09 compute-0 sudo[251631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:09 compute-0 sudo[251631]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:09 compute-0 sudo[251656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:08:09 compute-0 sudo[251656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "897e454b-4a6d-4c3a-bde6-222bf698dc84", "format": "json"}]: dispatch
Jan 23 19:08:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:897e454b-4a6d-4c3a-bde6-222bf698dc84, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:897e454b-4a6d-4c3a-bde6-222bf698dc84, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:09 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:08:09.774+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '897e454b-4a6d-4c3a-bde6-222bf698dc84' of type subvolume
Jan 23 19:08:09 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '897e454b-4a6d-4c3a-bde6-222bf698dc84' of type subvolume
Jan 23 19:08:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "897e454b-4a6d-4c3a-bde6-222bf698dc84", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:897e454b-4a6d-4c3a-bde6-222bf698dc84, vol_name:cephfs) < ""
Jan 23 19:08:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/897e454b-4a6d-4c3a-bde6-222bf698dc84'' moved to trashcan
Jan 23 19:08:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:09 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:897e454b-4a6d-4c3a-bde6-222bf698dc84, vol_name:cephfs) < ""
Jan 23 19:08:10 compute-0 sudo[251656]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:08:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:08:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:08:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:08:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:08:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:08:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:08:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:08:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:08:10 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:08:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:08:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:08:10 compute-0 ceph-mon[75097]: pgmap v1150: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 64 KiB/s wr, 5 op/s
Jan 23 19:08:10 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "aea27463-e745-4cdf-b933-f34265646329", "format": "json"}]: dispatch
Jan 23 19:08:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:08:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:08:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:08:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:08:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:08:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:08:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:08:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:08:10 compute-0 nova_compute[240119]: 2026-01-23 19:08:10.347 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:10 compute-0 sudo[251712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:08:10 compute-0 sudo[251712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:10 compute-0 sudo[251712]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:10 compute-0 sudo[251737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:08:10 compute-0 sudo[251737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 76 KiB/s wr, 6 op/s
Jan 23 19:08:10 compute-0 podman[251774]: 2026-01-23 19:08:10.740086879 +0000 UTC m=+0.026125790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:08:11 compute-0 podman[251774]: 2026-01-23 19:08:11.457724883 +0000 UTC m=+0.743763764 container create 2d9e64f121ebc3b183b7a0460c69be05902375fc30cb9e8b856d114dcabbc5a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_grothendieck, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 19:08:11 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "897e454b-4a6d-4c3a-bde6-222bf698dc84", "format": "json"}]: dispatch
Jan 23 19:08:11 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "897e454b-4a6d-4c3a-bde6-222bf698dc84", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:11 compute-0 systemd[1]: Started libpod-conmon-2d9e64f121ebc3b183b7a0460c69be05902375fc30cb9e8b856d114dcabbc5a9.scope.
Jan 23 19:08:11 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:08:11 compute-0 podman[251774]: 2026-01-23 19:08:11.601125926 +0000 UTC m=+0.887164817 container init 2d9e64f121ebc3b183b7a0460c69be05902375fc30cb9e8b856d114dcabbc5a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_grothendieck, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:08:11 compute-0 podman[251774]: 2026-01-23 19:08:11.60990032 +0000 UTC m=+0.895939201 container start 2d9e64f121ebc3b183b7a0460c69be05902375fc30cb9e8b856d114dcabbc5a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_grothendieck, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:08:11 compute-0 podman[251788]: 2026-01-23 19:08:11.60991204 +0000 UTC m=+0.112913568 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 19:08:11 compute-0 podman[251774]: 2026-01-23 19:08:11.614276157 +0000 UTC m=+0.900315038 container attach 2d9e64f121ebc3b183b7a0460c69be05902375fc30cb9e8b856d114dcabbc5a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 23 19:08:11 compute-0 objective_grothendieck[251808]: 167 167
Jan 23 19:08:11 compute-0 systemd[1]: libpod-2d9e64f121ebc3b183b7a0460c69be05902375fc30cb9e8b856d114dcabbc5a9.scope: Deactivated successfully.
Jan 23 19:08:11 compute-0 podman[251774]: 2026-01-23 19:08:11.617526286 +0000 UTC m=+0.903565167 container died 2d9e64f121ebc3b183b7a0460c69be05902375fc30cb9e8b856d114dcabbc5a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_grothendieck, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 19:08:12 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "05d8d6a4-f40b-47a0-9f79-f4489052a0d6", "format": "json"}]: dispatch
Jan 23 19:08:12 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:05d8d6a4-f40b-47a0-9f79-f4489052a0d6, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5782aa911bfd641404105ac98fc5664fa85d0368a656866d054197447fce29e-merged.mount: Deactivated successfully.
Jan 23 19:08:12 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:05d8d6a4-f40b-47a0-9f79-f4489052a0d6, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:12 compute-0 podman[251774]: 2026-01-23 19:08:12.330044656 +0000 UTC m=+1.616083537 container remove 2d9e64f121ebc3b183b7a0460c69be05902375fc30cb9e8b856d114dcabbc5a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:08:12 compute-0 systemd[1]: libpod-conmon-2d9e64f121ebc3b183b7a0460c69be05902375fc30cb9e8b856d114dcabbc5a9.scope: Deactivated successfully.
Jan 23 19:08:12 compute-0 podman[251842]: 2026-01-23 19:08:12.516647284 +0000 UTC m=+0.062526599 container create 3ffc28472d072d0020357088f5a02a2cde80016ad453fc946ae1d3830d052360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcnulty, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 19:08:12 compute-0 ceph-mon[75097]: pgmap v1151: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 76 KiB/s wr, 6 op/s
Jan 23 19:08:12 compute-0 systemd[1]: Started libpod-conmon-3ffc28472d072d0020357088f5a02a2cde80016ad453fc946ae1d3830d052360.scope.
Jan 23 19:08:12 compute-0 podman[251842]: 2026-01-23 19:08:12.477358804 +0000 UTC m=+0.023238149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:08:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcb311bfee708bc692437e339dbd79ec9f57543cbab24b4e021fef5134ac4ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcb311bfee708bc692437e339dbd79ec9f57543cbab24b4e021fef5134ac4ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcb311bfee708bc692437e339dbd79ec9f57543cbab24b4e021fef5134ac4ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcb311bfee708bc692437e339dbd79ec9f57543cbab24b4e021fef5134ac4ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcb311bfee708bc692437e339dbd79ec9f57543cbab24b4e021fef5134ac4ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:12 compute-0 podman[251842]: 2026-01-23 19:08:12.620667184 +0000 UTC m=+0.166546529 container init 3ffc28472d072d0020357088f5a02a2cde80016ad453fc946ae1d3830d052360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 23 19:08:12 compute-0 podman[251842]: 2026-01-23 19:08:12.628263979 +0000 UTC m=+0.174143294 container start 3ffc28472d072d0020357088f5a02a2cde80016ad453fc946ae1d3830d052360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcnulty, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:08:12 compute-0 podman[251842]: 2026-01-23 19:08:12.66475775 +0000 UTC m=+0.210637095 container attach 3ffc28472d072d0020357088f5a02a2cde80016ad453fc946ae1d3830d052360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:08:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 76 KiB/s wr, 6 op/s
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9a2901e0-1e2a-431b-ad0b-0539c4b54da0", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:9a2901e0-1e2a-431b-ad0b-0539c4b54da0, vol_name:cephfs) < ""
Jan 23 19:08:13 compute-0 compassionate_mcnulty[251859]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:08:13 compute-0 compassionate_mcnulty[251859]: --> All data devices are unavailable
Jan 23 19:08:13 compute-0 systemd[1]: libpod-3ffc28472d072d0020357088f5a02a2cde80016ad453fc946ae1d3830d052360.scope: Deactivated successfully.
Jan 23 19:08:13 compute-0 podman[251842]: 2026-01-23 19:08:13.105056633 +0000 UTC m=+0.650935968 container died 3ffc28472d072d0020357088f5a02a2cde80016ad453fc946ae1d3830d052360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9a2901e0-1e2a-431b-ad0b-0539c4b54da0/95a77749-507c-4b3a-b63b-6b2b655ed135'.
Jan 23 19:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-edcb311bfee708bc692437e339dbd79ec9f57543cbab24b4e021fef5134ac4ac-merged.mount: Deactivated successfully.
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9a2901e0-1e2a-431b-ad0b-0539c4b54da0/.meta.tmp'
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9a2901e0-1e2a-431b-ad0b-0539c4b54da0/.meta.tmp' to config b'/volumes/_nogroup/9a2901e0-1e2a-431b-ad0b-0539c4b54da0/.meta'
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:9a2901e0-1e2a-431b-ad0b-0539c4b54da0, vol_name:cephfs) < ""
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9a2901e0-1e2a-431b-ad0b-0539c4b54da0", "format": "json"}]: dispatch
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a2901e0-1e2a-431b-ad0b-0539c4b54da0, vol_name:cephfs) < ""
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a2901e0-1e2a-431b-ad0b-0539c4b54da0, vol_name:cephfs) < ""
Jan 23 19:08:13 compute-0 podman[251842]: 2026-01-23 19:08:13.22653575 +0000 UTC m=+0.772415065 container remove 3ffc28472d072d0020357088f5a02a2cde80016ad453fc946ae1d3830d052360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 23 19:08:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:08:13 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:13 compute-0 systemd[1]: libpod-conmon-3ffc28472d072d0020357088f5a02a2cde80016ad453fc946ae1d3830d052360.scope: Deactivated successfully.
Jan 23 19:08:13 compute-0 sudo[251737]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:13 compute-0 sudo[251894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:08:13 compute-0 sudo[251894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:13 compute-0 sudo[251894]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3732f0e1-a134-4643-95be-27f3fc23eec2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3732f0e1-a134-4643-95be-27f3fc23eec2, vol_name:cephfs) < ""
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/3732f0e1-a134-4643-95be-27f3fc23eec2/c203658f-cf43-4de7-b814-374f653abba1'.
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3732f0e1-a134-4643-95be-27f3fc23eec2/.meta.tmp'
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3732f0e1-a134-4643-95be-27f3fc23eec2/.meta.tmp' to config b'/volumes/_nogroup/3732f0e1-a134-4643-95be-27f3fc23eec2/.meta'
Jan 23 19:08:13 compute-0 sudo[251919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3732f0e1-a134-4643-95be-27f3fc23eec2, vol_name:cephfs) < ""
Jan 23 19:08:13 compute-0 sudo[251919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3732f0e1-a134-4643-95be-27f3fc23eec2", "format": "json"}]: dispatch
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3732f0e1-a134-4643-95be-27f3fc23eec2, vol_name:cephfs) < ""
Jan 23 19:08:13 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3732f0e1-a134-4643-95be-27f3fc23eec2, vol_name:cephfs) < ""
Jan 23 19:08:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:08:13 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:13 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "05d8d6a4-f40b-47a0-9f79-f4489052a0d6", "format": "json"}]: dispatch
Jan 23 19:08:13 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:13 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:08:13.588 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:08:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:08:13.589 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:08:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:08:13.590 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:08:13 compute-0 podman[251955]: 2026-01-23 19:08:13.717749505 +0000 UTC m=+0.053197490 container create 11bfa3eaa29d2e9bfcda194d92d863203479b0812eadc73ee75f5619913791ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_yalow, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 19:08:13 compute-0 systemd[1]: Started libpod-conmon-11bfa3eaa29d2e9bfcda194d92d863203479b0812eadc73ee75f5619913791ea.scope.
Jan 23 19:08:13 compute-0 podman[251955]: 2026-01-23 19:08:13.685803546 +0000 UTC m=+0.021251551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:08:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:08:13 compute-0 podman[251955]: 2026-01-23 19:08:13.843031675 +0000 UTC m=+0.178479670 container init 11bfa3eaa29d2e9bfcda194d92d863203479b0812eadc73ee75f5619913791ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 23 19:08:13 compute-0 podman[251955]: 2026-01-23 19:08:13.851519152 +0000 UTC m=+0.186967127 container start 11bfa3eaa29d2e9bfcda194d92d863203479b0812eadc73ee75f5619913791ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 23 19:08:13 compute-0 competent_yalow[251970]: 167 167
Jan 23 19:08:13 compute-0 systemd[1]: libpod-11bfa3eaa29d2e9bfcda194d92d863203479b0812eadc73ee75f5619913791ea.scope: Deactivated successfully.
Jan 23 19:08:13 compute-0 podman[251955]: 2026-01-23 19:08:13.856899774 +0000 UTC m=+0.192347779 container attach 11bfa3eaa29d2e9bfcda194d92d863203479b0812eadc73ee75f5619913791ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:08:13 compute-0 podman[251955]: 2026-01-23 19:08:13.857219761 +0000 UTC m=+0.192667736 container died 11bfa3eaa29d2e9bfcda194d92d863203479b0812eadc73ee75f5619913791ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee5e7d539124de691289f619c0e2d05d00942e2831c07a1c65d949b40d04fa5e-merged.mount: Deactivated successfully.
Jan 23 19:08:13 compute-0 podman[251955]: 2026-01-23 19:08:13.980973963 +0000 UTC m=+0.316421938 container remove 11bfa3eaa29d2e9bfcda194d92d863203479b0812eadc73ee75f5619913791ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_yalow, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:08:14 compute-0 systemd[1]: libpod-conmon-11bfa3eaa29d2e9bfcda194d92d863203479b0812eadc73ee75f5619913791ea.scope: Deactivated successfully.
Jan 23 19:08:14 compute-0 podman[251993]: 2026-01-23 19:08:14.152641526 +0000 UTC m=+0.045776329 container create f491c45e94f3182c6420ecaa77bf094eae853de24529960199e6e3c175419ac3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mestorf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 19:08:14 compute-0 podman[251993]: 2026-01-23 19:08:14.131861438 +0000 UTC m=+0.024996251 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:08:14 compute-0 systemd[1]: Started libpod-conmon-f491c45e94f3182c6420ecaa77bf094eae853de24529960199e6e3c175419ac3.scope.
Jan 23 19:08:14 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c01a631c1077accfc6adace2bfc1b7c30cd2e6b7f91af2990037e29af9cb5f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c01a631c1077accfc6adace2bfc1b7c30cd2e6b7f91af2990037e29af9cb5f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c01a631c1077accfc6adace2bfc1b7c30cd2e6b7f91af2990037e29af9cb5f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c01a631c1077accfc6adace2bfc1b7c30cd2e6b7f91af2990037e29af9cb5f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:14 compute-0 podman[251993]: 2026-01-23 19:08:14.379992707 +0000 UTC m=+0.273127530 container init f491c45e94f3182c6420ecaa77bf094eae853de24529960199e6e3c175419ac3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 19:08:14 compute-0 podman[251993]: 2026-01-23 19:08:14.388116696 +0000 UTC m=+0.281251489 container start f491c45e94f3182c6420ecaa77bf094eae853de24529960199e6e3c175419ac3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mestorf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 23 19:08:14 compute-0 podman[251993]: 2026-01-23 19:08:14.392576965 +0000 UTC m=+0.285711778 container attach f491c45e94f3182c6420ecaa77bf094eae853de24529960199e6e3c175419ac3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 19:08:14 compute-0 ceph-mon[75097]: pgmap v1152: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 76 KiB/s wr, 6 op/s
Jan 23 19:08:14 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9a2901e0-1e2a-431b-ad0b-0539c4b54da0", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:14 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9a2901e0-1e2a-431b-ad0b-0539c4b54da0", "format": "json"}]: dispatch
Jan 23 19:08:14 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3732f0e1-a134-4643-95be-27f3fc23eec2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:14 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3732f0e1-a134-4643-95be-27f3fc23eec2", "format": "json"}]: dispatch
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]: {
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:     "0": [
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:         {
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "devices": [
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "/dev/loop3"
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             ],
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_name": "ceph_lv0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_size": "21470642176",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "name": "ceph_lv0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "tags": {
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.cluster_name": "ceph",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.crush_device_class": "",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.encrypted": "0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.objectstore": "bluestore",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.osd_id": "0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.type": "block",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.vdo": "0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.with_tpm": "0"
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             },
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "type": "block",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "vg_name": "ceph_vg0"
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:         }
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:     ],
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:     "1": [
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:         {
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "devices": [
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "/dev/loop4"
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             ],
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_name": "ceph_lv1",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_size": "21470642176",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "name": "ceph_lv1",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "tags": {
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.cluster_name": "ceph",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.crush_device_class": "",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.encrypted": "0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.objectstore": "bluestore",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.osd_id": "1",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.type": "block",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.vdo": "0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.with_tpm": "0"
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             },
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "type": "block",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "vg_name": "ceph_vg1"
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:         }
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:     ],
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:     "2": [
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:         {
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "devices": [
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "/dev/loop5"
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             ],
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_name": "ceph_lv2",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_size": "21470642176",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "name": "ceph_lv2",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "tags": {
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.cluster_name": "ceph",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.crush_device_class": "",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.encrypted": "0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.objectstore": "bluestore",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.osd_id": "2",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.type": "block",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.vdo": "0",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:                 "ceph.with_tpm": "0"
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             },
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "type": "block",
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:             "vg_name": "ceph_vg2"
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:         }
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]:     ]
Jan 23 19:08:14 compute-0 peaceful_mestorf[252010]: }
Jan 23 19:08:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 542 B/s rd, 107 KiB/s wr, 9 op/s
Jan 23 19:08:14 compute-0 systemd[1]: libpod-f491c45e94f3182c6420ecaa77bf094eae853de24529960199e6e3c175419ac3.scope: Deactivated successfully.
Jan 23 19:08:14 compute-0 podman[251993]: 2026-01-23 19:08:14.689074686 +0000 UTC m=+0.582209479 container died f491c45e94f3182c6420ecaa77bf094eae853de24529960199e6e3c175419ac3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 19:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c01a631c1077accfc6adace2bfc1b7c30cd2e6b7f91af2990037e29af9cb5f0-merged.mount: Deactivated successfully.
Jan 23 19:08:14 compute-0 podman[251993]: 2026-01-23 19:08:14.861940488 +0000 UTC m=+0.755075281 container remove f491c45e94f3182c6420ecaa77bf094eae853de24529960199e6e3c175419ac3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mestorf, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 23 19:08:14 compute-0 systemd[1]: libpod-conmon-f491c45e94f3182c6420ecaa77bf094eae853de24529960199e6e3c175419ac3.scope: Deactivated successfully.
Jan 23 19:08:14 compute-0 sudo[251919]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:14 compute-0 sudo[252031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:08:14 compute-0 sudo[252031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:14 compute-0 sudo[252031]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:15 compute-0 sudo[252056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:08:15 compute-0 sudo[252056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:15 compute-0 podman[252092]: 2026-01-23 19:08:15.303262785 +0000 UTC m=+0.025358760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:08:15 compute-0 podman[252092]: 2026-01-23 19:08:15.462038732 +0000 UTC m=+0.184134687 container create 00cf7781280e6525c4c584b46be8353e7b9a925e9afe561fb8e662f1c292f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 23 19:08:15 compute-0 systemd[1]: Started libpod-conmon-00cf7781280e6525c4c584b46be8353e7b9a925e9afe561fb8e662f1c292f5f5.scope.
Jan 23 19:08:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:08:15 compute-0 podman[252092]: 2026-01-23 19:08:15.680817975 +0000 UTC m=+0.402913950 container init 00cf7781280e6525c4c584b46be8353e7b9a925e9afe561fb8e662f1c292f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:08:15 compute-0 podman[252092]: 2026-01-23 19:08:15.687688592 +0000 UTC m=+0.409784547 container start 00cf7781280e6525c4c584b46be8353e7b9a925e9afe561fb8e662f1c292f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 23 19:08:15 compute-0 amazing_ishizaka[252108]: 167 167
Jan 23 19:08:15 compute-0 systemd[1]: libpod-00cf7781280e6525c4c584b46be8353e7b9a925e9afe561fb8e662f1c292f5f5.scope: Deactivated successfully.
Jan 23 19:08:15 compute-0 podman[252092]: 2026-01-23 19:08:15.693232048 +0000 UTC m=+0.415328023 container attach 00cf7781280e6525c4c584b46be8353e7b9a925e9afe561fb8e662f1c292f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:08:15 compute-0 podman[252092]: 2026-01-23 19:08:15.694334505 +0000 UTC m=+0.416430460 container died 00cf7781280e6525c4c584b46be8353e7b9a925e9afe561fb8e662f1c292f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:08:15 compute-0 ceph-mon[75097]: pgmap v1153: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 542 B/s rd, 107 KiB/s wr, 9 op/s
Jan 23 19:08:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-33b99ea8d194967ec0fe9dc320c8e856149c8e1942f51713b1687ebd7acb8a37-merged.mount: Deactivated successfully.
Jan 23 19:08:15 compute-0 podman[252092]: 2026-01-23 19:08:15.80635226 +0000 UTC m=+0.528448215 container remove 00cf7781280e6525c4c584b46be8353e7b9a925e9afe561fb8e662f1c292f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 23 19:08:15 compute-0 systemd[1]: libpod-conmon-00cf7781280e6525c4c584b46be8353e7b9a925e9afe561fb8e662f1c292f5f5.scope: Deactivated successfully.
Jan 23 19:08:16 compute-0 podman[252131]: 2026-01-23 19:08:15.947874106 +0000 UTC m=+0.023819972 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:08:16 compute-0 podman[252131]: 2026-01-23 19:08:16.116117494 +0000 UTC m=+0.192063340 container create b948a62fed5f67a3a4d96f799c8899c27a1a6d2d9a56dc3d01a34f725ea75c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 23 19:08:16 compute-0 systemd[1]: Started libpod-conmon-b948a62fed5f67a3a4d96f799c8899c27a1a6d2d9a56dc3d01a34f725ea75c5f.scope.
Jan 23 19:08:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947fad0648ef94133b047e98b15c008fd508be0b9bcf0f2cf775e38326261c38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947fad0648ef94133b047e98b15c008fd508be0b9bcf0f2cf775e38326261c38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947fad0648ef94133b047e98b15c008fd508be0b9bcf0f2cf775e38326261c38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947fad0648ef94133b047e98b15c008fd508be0b9bcf0f2cf775e38326261c38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:08:16 compute-0 podman[252131]: 2026-01-23 19:08:16.19778273 +0000 UTC m=+0.273728606 container init b948a62fed5f67a3a4d96f799c8899c27a1a6d2d9a56dc3d01a34f725ea75c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 19:08:16 compute-0 podman[252131]: 2026-01-23 19:08:16.204488204 +0000 UTC m=+0.280434050 container start b948a62fed5f67a3a4d96f799c8899c27a1a6d2d9a56dc3d01a34f725ea75c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 19:08:16 compute-0 podman[252131]: 2026-01-23 19:08:16.21295029 +0000 UTC m=+0.288896146 container attach b948a62fed5f67a3a4d96f799c8899c27a1a6d2d9a56dc3d01a34f725ea75c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "05d8d6a4-f40b-47a0-9f79-f4489052a0d6_ff3bf6c7-50df-4ff3-b5c1-61682211af1e", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05d8d6a4-f40b-47a0-9f79-f4489052a0d6_ff3bf6c7-50df-4ff3-b5c1-61682211af1e, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05d8d6a4-f40b-47a0-9f79-f4489052a0d6_ff3bf6c7-50df-4ff3-b5c1-61682211af1e, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "05d8d6a4-f40b-47a0-9f79-f4489052a0d6", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05d8d6a4-f40b-47a0-9f79-f4489052a0d6, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05d8d6a4-f40b-47a0-9f79-f4489052a0d6, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "9a2901e0-1e2a-431b-ad0b-0539c4b54da0", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:9a2901e0-1e2a-431b-ad0b-0539c4b54da0, vol_name:cephfs) < ""
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:9a2901e0-1e2a-431b-ad0b-0539c4b54da0, vol_name:cephfs) < ""
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 72 KiB/s wr, 6 op/s
Jan 23 19:08:16 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "05d8d6a4-f40b-47a0-9f79-f4489052a0d6_ff3bf6c7-50df-4ff3-b5c1-61682211af1e", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:16 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "05d8d6a4-f40b-47a0-9f79-f4489052a0d6", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:16 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "9a2901e0-1e2a-431b-ad0b-0539c4b54da0", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3732f0e1-a134-4643-95be-27f3fc23eec2", "format": "json"}]: dispatch
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3732f0e1-a134-4643-95be-27f3fc23eec2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3732f0e1-a134-4643-95be-27f3fc23eec2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3732f0e1-a134-4643-95be-27f3fc23eec2' of type subvolume
Jan 23 19:08:16 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:08:16.844+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3732f0e1-a134-4643-95be-27f3fc23eec2' of type subvolume
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3732f0e1-a134-4643-95be-27f3fc23eec2", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3732f0e1-a134-4643-95be-27f3fc23eec2, vol_name:cephfs) < ""
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3732f0e1-a134-4643-95be-27f3fc23eec2'' moved to trashcan
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:16 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3732f0e1-a134-4643-95be-27f3fc23eec2, vol_name:cephfs) < ""
Jan 23 19:08:16 compute-0 lvm[252223]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:08:16 compute-0 lvm[252223]: VG ceph_vg0 finished
Jan 23 19:08:16 compute-0 lvm[252226]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:08:16 compute-0 lvm[252226]: VG ceph_vg1 finished
Jan 23 19:08:16 compute-0 lvm[252228]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:08:16 compute-0 lvm[252228]: VG ceph_vg2 finished
Jan 23 19:08:16 compute-0 laughing_rubin[252147]: {}
Jan 23 19:08:17 compute-0 systemd[1]: libpod-b948a62fed5f67a3a4d96f799c8899c27a1a6d2d9a56dc3d01a34f725ea75c5f.scope: Deactivated successfully.
Jan 23 19:08:17 compute-0 podman[252131]: 2026-01-23 19:08:17.024061308 +0000 UTC m=+1.100007154 container died b948a62fed5f67a3a4d96f799c8899c27a1a6d2d9a56dc3d01a34f725ea75c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 19:08:17 compute-0 systemd[1]: libpod-b948a62fed5f67a3a4d96f799c8899c27a1a6d2d9a56dc3d01a34f725ea75c5f.scope: Consumed 1.317s CPU time.
Jan 23 19:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-947fad0648ef94133b047e98b15c008fd508be0b9bcf0f2cf775e38326261c38-merged.mount: Deactivated successfully.
Jan 23 19:08:17 compute-0 podman[252131]: 2026-01-23 19:08:17.098937416 +0000 UTC m=+1.174883252 container remove b948a62fed5f67a3a4d96f799c8899c27a1a6d2d9a56dc3d01a34f725ea75c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 19:08:17 compute-0 systemd[1]: libpod-conmon-b948a62fed5f67a3a4d96f799c8899c27a1a6d2d9a56dc3d01a34f725ea75c5f.scope: Deactivated successfully.
Jan 23 19:08:17 compute-0 sudo[252056]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:08:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:08:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:08:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:08:17 compute-0 sudo[252244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:08:17 compute-0 sudo[252244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:08:17 compute-0 sudo[252244]: pam_unix(sudo:session): session closed for user root
Jan 23 19:08:18 compute-0 ceph-mon[75097]: pgmap v1154: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 72 KiB/s wr, 6 op/s
Jan 23 19:08:18 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3732f0e1-a134-4643-95be-27f3fc23eec2", "format": "json"}]: dispatch
Jan 23 19:08:18 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3732f0e1-a134-4643-95be-27f3fc23eec2", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:08:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:08:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 72 KiB/s wr, 6 op/s
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "384e95a6-7587-4e64-943f-75b52be57cc0", "format": "json"}]: dispatch
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:384e95a6-7587-4e64-943f-75b52be57cc0, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:384e95a6-7587-4e64-943f-75b52be57cc0, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9a2901e0-1e2a-431b-ad0b-0539c4b54da0", "format": "json"}]: dispatch
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9a2901e0-1e2a-431b-ad0b-0539c4b54da0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9a2901e0-1e2a-431b-ad0b-0539c4b54da0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:19 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:08:19.830+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a2901e0-1e2a-431b-ad0b-0539c4b54da0' of type subvolume
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a2901e0-1e2a-431b-ad0b-0539c4b54da0' of type subvolume
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9a2901e0-1e2a-431b-ad0b-0539c4b54da0", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a2901e0-1e2a-431b-ad0b-0539c4b54da0, vol_name:cephfs) < ""
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9a2901e0-1e2a-431b-ad0b-0539c4b54da0'' moved to trashcan
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:19 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a2901e0-1e2a-431b-ad0b-0539c4b54da0, vol_name:cephfs) < ""
Jan 23 19:08:20 compute-0 podman[252269]: 2026-01-23 19:08:20.006790049 +0000 UTC m=+0.085392476 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 19:08:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "82ea23d8-4358-4944-b79d-0263b8fedbbf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:82ea23d8-4358-4944-b79d-0263b8fedbbf, vol_name:cephfs) < ""
Jan 23 19:08:20 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/82ea23d8-4358-4944-b79d-0263b8fedbbf/891bbab1-e8b8-4240-8971-1cd48fa45d78'.
Jan 23 19:08:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/82ea23d8-4358-4944-b79d-0263b8fedbbf/.meta.tmp'
Jan 23 19:08:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/82ea23d8-4358-4944-b79d-0263b8fedbbf/.meta.tmp' to config b'/volumes/_nogroup/82ea23d8-4358-4944-b79d-0263b8fedbbf/.meta'
Jan 23 19:08:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:82ea23d8-4358-4944-b79d-0263b8fedbbf, vol_name:cephfs) < ""
Jan 23 19:08:20 compute-0 ceph-mon[75097]: pgmap v1155: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 72 KiB/s wr, 6 op/s
Jan 23 19:08:20 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "384e95a6-7587-4e64-943f-75b52be57cc0", "format": "json"}]: dispatch
Jan 23 19:08:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "82ea23d8-4358-4944-b79d-0263b8fedbbf", "format": "json"}]: dispatch
Jan 23 19:08:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:82ea23d8-4358-4944-b79d-0263b8fedbbf, vol_name:cephfs) < ""
Jan 23 19:08:20 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:82ea23d8-4358-4944-b79d-0263b8fedbbf, vol_name:cephfs) < ""
Jan 23 19:08:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:08:20 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 99 KiB/s wr, 9 op/s
Jan 23 19:08:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 23 19:08:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9a2901e0-1e2a-431b-ad0b-0539c4b54da0", "format": "json"}]: dispatch
Jan 23 19:08:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9a2901e0-1e2a-431b-ad0b-0539c4b54da0", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "82ea23d8-4358-4944-b79d-0263b8fedbbf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:21 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "82ea23d8-4358-4944-b79d-0263b8fedbbf", "format": "json"}]: dispatch
Jan 23 19:08:21 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 23 19:08:21 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 23 19:08:22 compute-0 ceph-mon[75097]: pgmap v1156: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 99 KiB/s wr, 9 op/s
Jan 23 19:08:22 compute-0 ceph-mon[75097]: osdmap e151: 3 total, 3 up, 3 in
Jan 23 19:08:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 77 KiB/s wr, 6 op/s
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "71a90bd9-cf43-41a3-9d27-2205ff800090", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:71a90bd9-cf43-41a3-9d27-2205ff800090, vol_name:cephfs) < ""
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/71a90bd9-cf43-41a3-9d27-2205ff800090/a74f09d5-6405-402c-9941-199927d22561'.
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/71a90bd9-cf43-41a3-9d27-2205ff800090/.meta.tmp'
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/71a90bd9-cf43-41a3-9d27-2205ff800090/.meta.tmp' to config b'/volumes/_nogroup/71a90bd9-cf43-41a3-9d27-2205ff800090/.meta'
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:71a90bd9-cf43-41a3-9d27-2205ff800090, vol_name:cephfs) < ""
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "71a90bd9-cf43-41a3-9d27-2205ff800090", "format": "json"}]: dispatch
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:71a90bd9-cf43-41a3-9d27-2205ff800090, vol_name:cephfs) < ""
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:71a90bd9-cf43-41a3-9d27-2205ff800090, vol_name:cephfs) < ""
Jan 23 19:08:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:08:23 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "82ea23d8-4358-4944-b79d-0263b8fedbbf", "format": "json"}]: dispatch
Jan 23 19:08:23 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:82ea23d8-4358-4944-b79d-0263b8fedbbf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:82ea23d8-4358-4944-b79d-0263b8fedbbf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:24 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:08:23.999+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '82ea23d8-4358-4944-b79d-0263b8fedbbf' of type subvolume
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '82ea23d8-4358-4944-b79d-0263b8fedbbf' of type subvolume
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "82ea23d8-4358-4944-b79d-0263b8fedbbf", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:82ea23d8-4358-4944-b79d-0263b8fedbbf, vol_name:cephfs) < ""
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/82ea23d8-4358-4944-b79d-0263b8fedbbf'' moved to trashcan
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:82ea23d8-4358-4944-b79d-0263b8fedbbf, vol_name:cephfs) < ""
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "384e95a6-7587-4e64-943f-75b52be57cc0_769b22c4-afb8-4b6e-b8a0-d29e1651ab74", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:384e95a6-7587-4e64-943f-75b52be57cc0_769b22c4-afb8-4b6e-b8a0-d29e1651ab74, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:384e95a6-7587-4e64-943f-75b52be57cc0_769b22c4-afb8-4b6e-b8a0-d29e1651ab74, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "384e95a6-7587-4e64-943f-75b52be57cc0", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:384e95a6-7587-4e64-943f-75b52be57cc0, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:24 compute-0 ceph-mon[75097]: pgmap v1158: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 77 KiB/s wr, 6 op/s
Jan 23 19:08:24 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "71a90bd9-cf43-41a3-9d27-2205ff800090", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:24 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "71a90bd9-cf43-41a3-9d27-2205ff800090", "format": "json"}]: dispatch
Jan 23 19:08:24 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:384e95a6-7587-4e64-943f-75b52be57cc0, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 80 KiB/s wr, 7 op/s
Jan 23 19:08:25 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "82ea23d8-4358-4944-b79d-0263b8fedbbf", "format": "json"}]: dispatch
Jan 23 19:08:25 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "82ea23d8-4358-4944-b79d-0263b8fedbbf", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:25 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "384e95a6-7587-4e64-943f-75b52be57cc0_769b22c4-afb8-4b6e-b8a0-d29e1651ab74", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:25 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "384e95a6-7587-4e64-943f-75b52be57cc0", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:26 compute-0 ceph-mon[75097]: pgmap v1159: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 80 KiB/s wr, 7 op/s
Jan 23 19:08:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 80 KiB/s wr, 7 op/s
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4b279f6d-4f8f-42f5-a19b-c361caa46a7c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4b279f6d-4f8f-42f5-a19b-c361caa46a7c, vol_name:cephfs) < ""
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4b279f6d-4f8f-42f5-a19b-c361caa46a7c/96b99af2-fe85-4aa6-8b8a-0cf0e67a4756'.
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4b279f6d-4f8f-42f5-a19b-c361caa46a7c/.meta.tmp'
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4b279f6d-4f8f-42f5-a19b-c361caa46a7c/.meta.tmp' to config b'/volumes/_nogroup/4b279f6d-4f8f-42f5-a19b-c361caa46a7c/.meta'
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4b279f6d-4f8f-42f5-a19b-c361caa46a7c, vol_name:cephfs) < ""
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4b279f6d-4f8f-42f5-a19b-c361caa46a7c", "format": "json"}]: dispatch
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4b279f6d-4f8f-42f5-a19b-c361caa46a7c, vol_name:cephfs) < ""
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4b279f6d-4f8f-42f5-a19b-c361caa46a7c, vol_name:cephfs) < ""
Jan 23 19:08:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:08:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007", "format": "json"}]: dispatch
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:27 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "71a90bd9-cf43-41a3-9d27-2205ff800090", "format": "json"}]: dispatch
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:71a90bd9-cf43-41a3-9d27-2205ff800090, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:71a90bd9-cf43-41a3-9d27-2205ff800090, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:28 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:08:28.003+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '71a90bd9-cf43-41a3-9d27-2205ff800090' of type subvolume
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '71a90bd9-cf43-41a3-9d27-2205ff800090' of type subvolume
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "71a90bd9-cf43-41a3-9d27-2205ff800090", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:71a90bd9-cf43-41a3-9d27-2205ff800090, vol_name:cephfs) < ""
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/71a90bd9-cf43-41a3-9d27-2205ff800090'' moved to trashcan
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:71a90bd9-cf43-41a3-9d27-2205ff800090, vol_name:cephfs) < ""
Jan 23 19:08:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 23 19:08:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 23 19:08:28 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 23 19:08:28 compute-0 ceph-mon[75097]: pgmap v1160: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 80 KiB/s wr, 7 op/s
Jan 23 19:08:28 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4b279f6d-4f8f-42f5-a19b-c361caa46a7c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:28 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4b279f6d-4f8f-42f5-a19b-c361caa46a7c", "format": "json"}]: dispatch
Jan 23 19:08:28 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:28 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007", "format": "json"}]: dispatch
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 5 op/s
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:08:28
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'volumes', 'images', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Jan 23 19:08:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:08:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 23 19:08:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 23 19:08:29 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 23 19:08:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "71a90bd9-cf43-41a3-9d27-2205ff800090", "format": "json"}]: dispatch
Jan 23 19:08:29 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "71a90bd9-cf43-41a3-9d27-2205ff800090", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:29 compute-0 ceph-mon[75097]: osdmap e152: 3 total, 3 up, 3 in
Jan 23 19:08:29 compute-0 ceph-mon[75097]: osdmap e153: 3 total, 3 up, 3 in
Jan 23 19:08:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:08:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:08:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:08:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:08:30 compute-0 ceph-mon[75097]: pgmap v1162: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 5 op/s
Jan 23 19:08:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:08:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0697aa8160>)]
Jan 23 19:08:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 19:08:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 130 KiB/s wr, 10 op/s
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "format": "json"}]: dispatch
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, vol_name:cephfs) < ""
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c5afb25c-aa10-424c-a2b9-1de8cd95ec12'' moved to trashcan
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5afb25c-aa10-424c-a2b9-1de8cd95ec12, vol_name:cephfs) < ""
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007_92391ed6-22d5-4be9-ab4f-a3e9253c123a", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:31 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007_92391ed6-22d5-4be9-ab4f-a3e9253c123a, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007_92391ed6-22d5-4be9-ab4f-a3e9253c123a, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:32 compute-0 ceph-mon[75097]: pgmap v1164: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 130 KiB/s wr, 10 op/s
Jan 23 19:08:32 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "format": "json"}]: dispatch
Jan 23 19:08:32 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5afb25c-aa10-424c-a2b9-1de8cd95ec12", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:32 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007_92391ed6-22d5-4be9-ab4f-a3e9253c123a", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4b279f6d-4f8f-42f5-a19b-c361caa46a7c", "format": "json"}]: dispatch
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4b279f6d-4f8f-42f5-a19b-c361caa46a7c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4b279f6d-4f8f-42f5-a19b-c361caa46a7c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:32 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:08:32.412+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4b279f6d-4f8f-42f5-a19b-c361caa46a7c' of type subvolume
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4b279f6d-4f8f-42f5-a19b-c361caa46a7c' of type subvolume
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4b279f6d-4f8f-42f5-a19b-c361caa46a7c", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4b279f6d-4f8f-42f5-a19b-c361caa46a7c, vol_name:cephfs) < ""
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4b279f6d-4f8f-42f5-a19b-c361caa46a7c'' moved to trashcan
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4b279f6d-4f8f-42f5-a19b-c361caa46a7c, vol_name:cephfs) < ""
Jan 23 19:08:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 72 KiB/s wr, 5 op/s
Jan 23 19:08:32 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.rszfwe(active, since 34m)
Jan 23 19:08:33 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "0d41606b-3c6a-49d1-ae8f-8d8fc9ec2007", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:33 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4b279f6d-4f8f-42f5-a19b-c361caa46a7c", "format": "json"}]: dispatch
Jan 23 19:08:33 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4b279f6d-4f8f-42f5-a19b-c361caa46a7c", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:33 compute-0 ceph-mon[75097]: mgrmap e15: compute-0.rszfwe(active, since 34m)
Jan 23 19:08:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:34 compute-0 ceph-mon[75097]: pgmap v1165: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 72 KiB/s wr, 5 op/s
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 126 KiB/s wr, 10 op/s
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "snap_name": "f7119362-56c6-41a3-a825-295c8385d456_2e22f4cc-505c-4e71-9134-2fa711cee58c", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f7119362-56c6-41a3-a825-295c8385d456_2e22f4cc-505c-4e71-9134-2fa711cee58c, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta.tmp'
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta.tmp' to config b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta'
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f7119362-56c6-41a3-a825-295c8385d456_2e22f4cc-505c-4e71-9134-2fa711cee58c, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "snap_name": "f7119362-56c6-41a3-a825-295c8385d456", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f7119362-56c6-41a3-a825-295c8385d456, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta.tmp'
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta.tmp' to config b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101/.meta'
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f7119362-56c6-41a3-a825-295c8385d456, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0", "format": "json"}]: dispatch
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:34 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:35 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "snap_name": "c72bee12-7f03-4342-a0eb-04d39fa0a5a4_117dfa48-4122-43f1-91a6-c0069747cede", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c72bee12-7f03-4342-a0eb-04d39fa0a5a4_117dfa48-4122-43f1-91a6-c0069747cede, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:08:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203/.meta.tmp'
Jan 23 19:08:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203/.meta.tmp' to config b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203/.meta'
Jan 23 19:08:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c72bee12-7f03-4342-a0eb-04d39fa0a5a4_117dfa48-4122-43f1-91a6-c0069747cede, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:08:35 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "snap_name": "c72bee12-7f03-4342-a0eb-04d39fa0a5a4", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c72bee12-7f03-4342-a0eb-04d39fa0a5a4, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:08:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203/.meta.tmp'
Jan 23 19:08:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203/.meta.tmp' to config b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203/.meta'
Jan 23 19:08:35 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c72bee12-7f03-4342-a0eb-04d39fa0a5a4, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:08:36 compute-0 ceph-mon[75097]: pgmap v1166: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 126 KiB/s wr, 10 op/s
Jan 23 19:08:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "snap_name": "f7119362-56c6-41a3-a825-295c8385d456_2e22f4cc-505c-4e71-9134-2fa711cee58c", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "snap_name": "f7119362-56c6-41a3-a825-295c8385d456", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:36 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0", "format": "json"}]: dispatch
Jan 23 19:08:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 494 B/s rd, 122 KiB/s wr, 10 op/s
Jan 23 19:08:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 23 19:08:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 23 19:08:37 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "snap_name": "c72bee12-7f03-4342-a0eb-04d39fa0a5a4_117dfa48-4122-43f1-91a6-c0069747cede", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:37 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "snap_name": "c72bee12-7f03-4342-a0eb-04d39fa0a5a4", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:37 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.124305) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195317124347, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1344, "num_deletes": 253, "total_data_size": 1913711, "memory_usage": 1942504, "flush_reason": "Manual Compaction"}
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195317142597, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1871465, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24995, "largest_seqno": 26338, "table_properties": {"data_size": 1864987, "index_size": 3553, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15290, "raw_average_key_size": 20, "raw_value_size": 1851331, "raw_average_value_size": 2529, "num_data_blocks": 159, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769195224, "oldest_key_time": 1769195224, "file_creation_time": 1769195317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 18344 microseconds, and 4891 cpu microseconds.
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.142648) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1871465 bytes OK
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.142667) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.146050) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.146073) EVENT_LOG_v1 {"time_micros": 1769195317146065, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.146092) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1907410, prev total WAL file size 1907410, number of live WAL files 2.
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.146698) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1827KB)], [56(10043KB)]
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195317146735, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12156172, "oldest_snapshot_seqno": -1}
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5715 keys, 10501836 bytes, temperature: kUnknown
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195317237860, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10501836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10459863, "index_size": 26567, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 142487, "raw_average_key_size": 24, "raw_value_size": 10353696, "raw_average_value_size": 1811, "num_data_blocks": 1105, "num_entries": 5715, "num_filter_entries": 5715, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769195317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.238135) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10501836 bytes
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.241056) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.3 rd, 115.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.8 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(12.1) write-amplify(5.6) OK, records in: 6240, records dropped: 525 output_compression: NoCompression
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.241118) EVENT_LOG_v1 {"time_micros": 1769195317241095, "job": 30, "event": "compaction_finished", "compaction_time_micros": 91226, "compaction_time_cpu_micros": 23300, "output_level": 6, "num_output_files": 1, "total_output_size": 10501836, "num_input_records": 6240, "num_output_records": 5715, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195317241636, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195317243549, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.146626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.243601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.243605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.243606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.243608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:08:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:08:37.243609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:08:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 23 19:08:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 23 19:08:38 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 23 19:08:38 compute-0 ceph-mon[75097]: pgmap v1167: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 494 B/s rd, 122 KiB/s wr, 10 op/s
Jan 23 19:08:38 compute-0 ceph-mon[75097]: osdmap e154: 3 total, 3 up, 3 in
Jan 23 19:08:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 23 19:08:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 23 19:08:38 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 23 19:08:38 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "format": "json"}]: dispatch
Jan 23 19:08:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c09bd88d-6bf6-440e-b929-87c3d463c101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c09bd88d-6bf6-440e-b929-87c3d463c101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:38 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:08:38.428+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c09bd88d-6bf6-440e-b929-87c3d463c101' of type subvolume
Jan 23 19:08:38 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c09bd88d-6bf6-440e-b929-87c3d463c101' of type subvolume
Jan 23 19:08:38 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:08:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c09bd88d-6bf6-440e-b929-87c3d463c101'' moved to trashcan
Jan 23 19:08:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:38 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c09bd88d-6bf6-440e-b929-87c3d463c101, vol_name:cephfs) < ""
Jan 23 19:08:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 73 KiB/s wr, 6 op/s
Jan 23 19:08:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "format": "json"}]: dispatch
Jan 23 19:08:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a1da1abe-af26-445a-9262-0ff686ee8203, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a1da1abe-af26-445a-9262-0ff686ee8203, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:08:39 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:08:39.052+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a1da1abe-af26-445a-9262-0ff686ee8203' of type subvolume
Jan 23 19:08:39 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a1da1abe-af26-445a-9262-0ff686ee8203' of type subvolume
Jan 23 19:08:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:08:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a1da1abe-af26-445a-9262-0ff686ee8203'' moved to trashcan
Jan 23 19:08:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:08:39 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a1da1abe-af26-445a-9262-0ff686ee8203, vol_name:cephfs) < ""
Jan 23 19:08:39 compute-0 ceph-mon[75097]: osdmap e155: 3 total, 3 up, 3 in
Jan 23 19:08:39 compute-0 ceph-mon[75097]: osdmap e156: 3 total, 3 up, 3 in
Jan 23 19:08:39 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "format": "json"}]: dispatch
Jan 23 19:08:39 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c09bd88d-6bf6-440e-b929-87c3d463c101", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:40 compute-0 ceph-mon[75097]: pgmap v1171: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 73 KiB/s wr, 6 op/s
Jan 23 19:08:40 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "format": "json"}]: dispatch
Jan 23 19:08:40 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a1da1abe-af26-445a-9262-0ff686ee8203", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0_59ad9788-d464-4c36-8276-7b68dd1c408e", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0_59ad9788-d464-4c36-8276-7b68dd1c408e, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0_59ad9788-d464-4c36-8276-7b68dd1c408e, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658925654297114 of space, bias 1.0, pg target 0.1997677696289134 quantized to 32 (current 32)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00046411378468869135 of space, bias 4.0, pg target 0.5569365416264296 quantized to 16 (current 16)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00017169491111545225 quantized to 32 (current 32)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:08:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 92 KiB/s wr, 8 op/s
Jan 23 19:08:41 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0_59ad9788-d464-4c36-8276-7b68dd1c408e", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:41 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "7a9b1bc3-c5b4-4ffd-bcd5-79b532a504e0", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:41 compute-0 podman[252291]: 2026-01-23 19:08:41.997426032 +0000 UTC m=+0.078577970 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:08:42 compute-0 ceph-mon[75097]: pgmap v1172: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 92 KiB/s wr, 8 op/s
Jan 23 19:08:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 92 KiB/s wr, 8 op/s
Jan 23 19:08:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 23 19:08:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 23 19:08:43 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 23 19:08:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 23 19:08:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 23 19:08:44 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 23 19:08:44 compute-0 ceph-mon[75097]: pgmap v1173: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 92 KiB/s wr, 8 op/s
Jan 23 19:08:44 compute-0 ceph-mon[75097]: osdmap e157: 3 total, 3 up, 3 in
Jan 23 19:08:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 145 KiB/s wr, 15 op/s
Jan 23 19:08:45 compute-0 ceph-mon[75097]: osdmap e158: 3 total, 3 up, 3 in
Jan 23 19:08:45 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "fff171a8-cb0e-434d-aaef-163dc51e46db", "format": "json"}]: dispatch
Jan 23 19:08:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fff171a8-cb0e-434d-aaef-163dc51e46db, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:45 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fff171a8-cb0e-434d-aaef-163dc51e46db, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:46 compute-0 ceph-mon[75097]: pgmap v1176: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 145 KiB/s wr, 15 op/s
Jan 23 19:08:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 114 KiB/s wr, 12 op/s
Jan 23 19:08:47 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "fff171a8-cb0e-434d-aaef-163dc51e46db", "format": "json"}]: dispatch
Jan 23 19:08:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:48 compute-0 ceph-mon[75097]: pgmap v1177: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 114 KiB/s wr, 12 op/s
Jan 23 19:08:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 44 KiB/s wr, 5 op/s
Jan 23 19:08:50 compute-0 ceph-mon[75097]: pgmap v1178: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 44 KiB/s wr, 5 op/s
Jan 23 19:08:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 71 KiB/s wr, 6 op/s
Jan 23 19:08:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:08:50 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22/43852489-d26f-4f74-ae80-28dc4ab27e85'.
Jan 23 19:08:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22/.meta.tmp'
Jan 23 19:08:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22/.meta.tmp' to config b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22/.meta'
Jan 23 19:08:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:08:50 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "format": "json"}]: dispatch
Jan 23 19:08:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:08:50 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:08:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:08:50 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:50 compute-0 podman[252318]: 2026-01-23 19:08:50.988719025 +0000 UTC m=+0.071557288 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 23 19:08:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "fff171a8-cb0e-434d-aaef-163dc51e46db_4b69fd96-9531-494b-a7cb-cb4682a0d87d", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fff171a8-cb0e-434d-aaef-163dc51e46db_4b69fd96-9531-494b-a7cb-cb4682a0d87d, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fff171a8-cb0e-434d-aaef-163dc51e46db_4b69fd96-9531-494b-a7cb-cb4682a0d87d, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "fff171a8-cb0e-434d-aaef-163dc51e46db", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fff171a8-cb0e-434d-aaef-163dc51e46db, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:51 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fff171a8-cb0e-434d-aaef-163dc51e46db, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:51 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:08:52 compute-0 ceph-mon[75097]: pgmap v1179: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 71 KiB/s wr, 6 op/s
Jan 23 19:08:52 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:08:52 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "format": "json"}]: dispatch
Jan 23 19:08:52 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "fff171a8-cb0e-434d-aaef-163dc51e46db_4b69fd96-9531-494b-a7cb-cb4682a0d87d", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:52 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "fff171a8-cb0e-434d-aaef-163dc51e46db", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 551 B/s rd, 61 KiB/s wr, 5 op/s
Jan 23 19:08:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 23 19:08:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 23 19:08:53 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 23 19:08:54 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "snap_name": "37fb13a4-3bbe-4617-bd83-2f119a82680c", "format": "json"}]: dispatch
Jan 23 19:08:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:37fb13a4-3bbe-4617-bd83-2f119a82680c, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:08:54 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:37fb13a4-3bbe-4617-bd83-2f119a82680c, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:08:54 compute-0 ceph-mon[75097]: pgmap v1180: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 551 B/s rd, 61 KiB/s wr, 5 op/s
Jan 23 19:08:54 compute-0 ceph-mon[75097]: osdmap e159: 3 total, 3 up, 3 in
Jan 23 19:08:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 50 KiB/s wr, 3 op/s
Jan 23 19:08:55 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "snap_name": "37fb13a4-3bbe-4617-bd83-2f119a82680c", "format": "json"}]: dispatch
Jan 23 19:08:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 23 19:08:56 compute-0 ceph-mon[75097]: pgmap v1182: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 50 KiB/s wr, 3 op/s
Jan 23 19:08:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 23 19:08:56 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 23 19:08:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 63 KiB/s wr, 4 op/s
Jan 23 19:08:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:08:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/142677203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:08:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:08:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/142677203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:08:57 compute-0 nova_compute[240119]: 2026-01-23 19:08:57.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:08:57 compute-0 nova_compute[240119]: 2026-01-23 19:08:57.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 19:08:57 compute-0 nova_compute[240119]: 2026-01-23 19:08:57.509 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 19:08:57 compute-0 ceph-mon[75097]: osdmap e160: 3 total, 3 up, 3 in
Jan 23 19:08:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/142677203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:08:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/142677203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:08:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "aea27463-e745-4cdf-b933-f34265646329_07765c7c-c79c-4c9c-b4e1-eb82fb30e21c", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aea27463-e745-4cdf-b933-f34265646329_07765c7c-c79c-4c9c-b4e1-eb82fb30e21c, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aea27463-e745-4cdf-b933-f34265646329_07765c7c-c79c-4c9c-b4e1-eb82fb30e21c, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:57 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "aea27463-e745-4cdf-b933-f34265646329", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aea27463-e745-4cdf-b933-f34265646329, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp'
Jan 23 19:08:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta.tmp' to config b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5/.meta'
Jan 23 19:08:57 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aea27463-e745-4cdf-b933-f34265646329, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:08:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:08:58 compute-0 ceph-mon[75097]: pgmap v1184: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 63 KiB/s wr, 4 op/s
Jan 23 19:08:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "aea27463-e745-4cdf-b933-f34265646329_07765c7c-c79c-4c9c-b4e1-eb82fb30e21c", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:58 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "snap_name": "aea27463-e745-4cdf-b933-f34265646329", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 3 op/s
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "snap_name": "37fb13a4-3bbe-4617-bd83-2f119a82680c_96b82404-033c-404f-8259-7402ea1ce023", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:37fb13a4-3bbe-4617-bd83-2f119a82680c_96b82404-033c-404f-8259-7402ea1ce023, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22/.meta.tmp'
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22/.meta.tmp' to config b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22/.meta'
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:37fb13a4-3bbe-4617-bd83-2f119a82680c_96b82404-033c-404f-8259-7402ea1ce023, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "snap_name": "37fb13a4-3bbe-4617-bd83-2f119a82680c", "force": true, "format": "json"}]: dispatch
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:37fb13a4-3bbe-4617-bd83-2f119a82680c, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22/.meta.tmp'
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22/.meta.tmp' to config b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22/.meta'
Jan 23 19:08:58 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:37fb13a4-3bbe-4617-bd83-2f119a82680c, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:09:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:09:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:09:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:09:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0697aa8550>)]
Jan 23 19:09:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 19:09:00 compute-0 ceph-mon[75097]: pgmap v1185: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 3 op/s
Jan 23 19:09:00 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "snap_name": "37fb13a4-3bbe-4617-bd83-2f119a82680c_96b82404-033c-404f-8259-7402ea1ce023", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:00 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "snap_name": "37fb13a4-3bbe-4617-bd83-2f119a82680c", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 6 op/s
Jan 23 19:09:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:09:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f069bb473d0>)]
Jan 23 19:09:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 19:09:01 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "format": "json"}]: dispatch
Jan 23 19:09:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:09:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:09:01 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:09:01.092+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8a58bec-b849-4bc6-8dba-3cc458df53f5' of type subvolume
Jan 23 19:09:01 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8a58bec-b849-4bc6-8dba-3cc458df53f5' of type subvolume
Jan 23 19:09:01 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:09:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b8a58bec-b849-4bc6-8dba-3cc458df53f5'' moved to trashcan
Jan 23 19:09:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:09:01 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8a58bec-b849-4bc6-8dba-3cc458df53f5, vol_name:cephfs) < ""
Jan 23 19:09:01 compute-0 nova_compute[240119]: 2026-01-23 19:09:01.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:01 compute-0 nova_compute[240119]: 2026-01-23 19:09:01.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:01 compute-0 nova_compute[240119]: 2026-01-23 19:09:01.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 19:09:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 23 19:09:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 23 19:09:01 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 23 19:09:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "format": "json"}]: dispatch
Jan 23 19:09:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:09:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:09:02 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:09:02.600+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cb4de7c2-b53e-4f25-93d6-250c4ea8ac22' of type subvolume
Jan 23 19:09:02 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cb4de7c2-b53e-4f25-93d6-250c4ea8ac22' of type subvolume
Jan 23 19:09:02 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:09:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cb4de7c2-b53e-4f25-93d6-250c4ea8ac22'' moved to trashcan
Jan 23 19:09:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:09:02 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cb4de7c2-b53e-4f25-93d6-250c4ea8ac22, vol_name:cephfs) < ""
Jan 23 19:09:02 compute-0 ceph-mon[75097]: pgmap v1186: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 6 op/s
Jan 23 19:09:02 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "format": "json"}]: dispatch
Jan 23 19:09:02 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b8a58bec-b849-4bc6-8dba-3cc458df53f5", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:02 compute-0 ceph-mon[75097]: osdmap e161: 3 total, 3 up, 3 in
Jan 23 19:09:02 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.rszfwe(active, since 34m)
Jan 23 19:09:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s wr, 2 op/s
Jan 23 19:09:03 compute-0 nova_compute[240119]: 2026-01-23 19:09:03.362 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:03 compute-0 nova_compute[240119]: 2026-01-23 19:09:03.363 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:03 compute-0 nova_compute[240119]: 2026-01-23 19:09:03.398 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:09:03 compute-0 nova_compute[240119]: 2026-01-23 19:09:03.399 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:09:03 compute-0 nova_compute[240119]: 2026-01-23 19:09:03.399 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:09:03 compute-0 nova_compute[240119]: 2026-01-23 19:09:03.399 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:09:03 compute-0 nova_compute[240119]: 2026-01-23 19:09:03.399 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:09:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 23 19:09:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 23 19:09:03 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 23 19:09:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "format": "json"}]: dispatch
Jan 23 19:09:03 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cb4de7c2-b53e-4f25-93d6-250c4ea8ac22", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:03 compute-0 ceph-mon[75097]: mgrmap e16: compute-0.rszfwe(active, since 34m)
Jan 23 19:09:03 compute-0 ceph-mon[75097]: osdmap e162: 3 total, 3 up, 3 in
Jan 23 19:09:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:09:03 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/81709230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:09:03 compute-0 nova_compute[240119]: 2026-01-23 19:09:03.971 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.164 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.165 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5042MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.165 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.166 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.232 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.233 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.249 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:09:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 118 KiB/s wr, 9 op/s
Jan 23 19:09:04 compute-0 ceph-mon[75097]: pgmap v1188: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s wr, 2 op/s
Jan 23 19:09:04 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/81709230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:09:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:09:04 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3574492195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.821 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.826 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.844 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.846 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:09:04 compute-0 nova_compute[240119]: 2026-01-23 19:09:04.846 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:09:05 compute-0 ceph-mon[75097]: pgmap v1190: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 118 KiB/s wr, 9 op/s
Jan 23 19:09:05 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3574492195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:09:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 118 KiB/s wr, 9 op/s
Jan 23 19:09:06 compute-0 nova_compute[240119]: 2026-01-23 19:09:06.832 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:06 compute-0 nova_compute[240119]: 2026-01-23 19:09:06.833 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:06 compute-0 nova_compute[240119]: 2026-01-23 19:09:06.833 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:07 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:09:07.018 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 19:09:07 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:09:07.020 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 19:09:08 compute-0 ceph-mon[75097]: pgmap v1191: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 118 KiB/s wr, 9 op/s
Jan 23 19:09:08 compute-0 nova_compute[240119]: 2026-01-23 19:09:08.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:08 compute-0 nova_compute[240119]: 2026-01-23 19:09:08.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:09:08 compute-0 nova_compute[240119]: 2026-01-23 19:09:08.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 23 19:09:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 69 KiB/s wr, 6 op/s
Jan 23 19:09:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 23 19:09:08 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 23 19:09:09 compute-0 nova_compute[240119]: 2026-01-23 19:09:09.386 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:09 compute-0 nova_compute[240119]: 2026-01-23 19:09:09.387 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:09:09 compute-0 nova_compute[240119]: 2026-01-23 19:09:09.387 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:09:09 compute-0 nova_compute[240119]: 2026-01-23 19:09:09.406 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:09:09 compute-0 ceph-mon[75097]: pgmap v1192: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 69 KiB/s wr, 6 op/s
Jan 23 19:09:09 compute-0 ceph-mon[75097]: osdmap e163: 3 total, 3 up, 3 in
Jan 23 19:09:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 82 KiB/s wr, 7 op/s
Jan 23 19:09:11 compute-0 nova_compute[240119]: 2026-01-23 19:09:11.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:09:11 compute-0 ceph-mon[75097]: pgmap v1194: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 82 KiB/s wr, 7 op/s
Jan 23 19:09:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 890 B/s rd, 71 KiB/s wr, 6 op/s
Jan 23 19:09:13 compute-0 podman[252380]: 2026-01-23 19:09:13.047168644 +0000 UTC m=+0.129974824 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 23 19:09:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:09:13.589 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:09:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:09:13.589 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:09:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:09:13.589 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:09:13 compute-0 ceph-mon[75097]: pgmap v1195: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 890 B/s rd, 71 KiB/s wr, 6 op/s
Jan 23 19:09:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 1 op/s
Jan 23 19:09:15 compute-0 ceph-mon[75097]: pgmap v1196: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 1 op/s
Jan 23 19:09:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 1 op/s
Jan 23 19:09:17 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:09:17.021 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 19:09:17 compute-0 sudo[252407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:09:17 compute-0 sudo[252407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:09:17 compute-0 sudo[252407]: pam_unix(sudo:session): session closed for user root
Jan 23 19:09:17 compute-0 sudo[252432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:09:17 compute-0 sudo[252432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:09:17 compute-0 sudo[252432]: pam_unix(sudo:session): session closed for user root
Jan 23 19:09:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:09:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:09:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:09:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:09:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:09:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:09:17 compute-0 ceph-mon[75097]: pgmap v1197: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 1 op/s
Jan 23 19:09:17 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:09:17 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:09:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:09:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:09:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:09:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:09:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:09:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:09:18 compute-0 sudo[252488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:09:18 compute-0 sudo[252488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:09:18 compute-0 sudo[252488]: pam_unix(sudo:session): session closed for user root
Jan 23 19:09:18 compute-0 sudo[252513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:09:18 compute-0 sudo[252513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:09:18 compute-0 podman[252552]: 2026-01-23 19:09:18.387861619 +0000 UTC m=+0.049146241 container create a1fe7e82ca7cab23a228c8c698a482e02b00d7d6a01357d55f38943385653684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 23 19:09:18 compute-0 systemd[1]: Started libpod-conmon-a1fe7e82ca7cab23a228c8c698a482e02b00d7d6a01357d55f38943385653684.scope.
Jan 23 19:09:18 compute-0 podman[252552]: 2026-01-23 19:09:18.362827877 +0000 UTC m=+0.024112539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:09:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:09:18 compute-0 podman[252552]: 2026-01-23 19:09:18.496090842 +0000 UTC m=+0.157375484 container init a1fe7e82ca7cab23a228c8c698a482e02b00d7d6a01357d55f38943385653684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 19:09:18 compute-0 podman[252552]: 2026-01-23 19:09:18.503701368 +0000 UTC m=+0.164985990 container start a1fe7e82ca7cab23a228c8c698a482e02b00d7d6a01357d55f38943385653684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_merkle, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 23 19:09:18 compute-0 nifty_merkle[252569]: 167 167
Jan 23 19:09:18 compute-0 systemd[1]: libpod-a1fe7e82ca7cab23a228c8c698a482e02b00d7d6a01357d55f38943385653684.scope: Deactivated successfully.
Jan 23 19:09:18 compute-0 conmon[252569]: conmon a1fe7e82ca7cab23a228 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1fe7e82ca7cab23a228c8c698a482e02b00d7d6a01357d55f38943385653684.scope/container/memory.events
Jan 23 19:09:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:18 compute-0 podman[252552]: 2026-01-23 19:09:18.578697819 +0000 UTC m=+0.239982471 container attach a1fe7e82ca7cab23a228c8c698a482e02b00d7d6a01357d55f38943385653684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_merkle, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:09:18 compute-0 podman[252552]: 2026-01-23 19:09:18.579314724 +0000 UTC m=+0.240599366 container died a1fe7e82ca7cab23a228c8c698a482e02b00d7d6a01357d55f38943385653684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_merkle, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-51b6d6a54d7ead8f52591db101eb566dd3c06076f8b4603c3c7f9c8639eb3841-merged.mount: Deactivated successfully.
Jan 23 19:09:18 compute-0 podman[252552]: 2026-01-23 19:09:18.652354378 +0000 UTC m=+0.313639000 container remove a1fe7e82ca7cab23a228c8c698a482e02b00d7d6a01357d55f38943385653684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:09:18 compute-0 systemd[1]: libpod-conmon-a1fe7e82ca7cab23a228c8c698a482e02b00d7d6a01357d55f38943385653684.scope: Deactivated successfully.
Jan 23 19:09:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 1 op/s
Jan 23 19:09:18 compute-0 podman[252592]: 2026-01-23 19:09:18.903227824 +0000 UTC m=+0.116507006 container create a7613c35f1d9fae92b549b6f9805fd52c63706e24a143aaf9fdb0be03403b7f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:09:18 compute-0 podman[252592]: 2026-01-23 19:09:18.812048858 +0000 UTC m=+0.025328060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:09:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:09:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:09:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:09:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:09:18 compute-0 systemd[1]: Started libpod-conmon-a7613c35f1d9fae92b549b6f9805fd52c63706e24a143aaf9fdb0be03403b7f5.scope.
Jan 23 19:09:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4cfcaf9966ecbb44162793a0c8d7b635d20b2321ebbea5bc6db327d2cb1ffb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4cfcaf9966ecbb44162793a0c8d7b635d20b2321ebbea5bc6db327d2cb1ffb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4cfcaf9966ecbb44162793a0c8d7b635d20b2321ebbea5bc6db327d2cb1ffb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4cfcaf9966ecbb44162793a0c8d7b635d20b2321ebbea5bc6db327d2cb1ffb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4cfcaf9966ecbb44162793a0c8d7b635d20b2321ebbea5bc6db327d2cb1ffb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:19 compute-0 podman[252592]: 2026-01-23 19:09:19.05170441 +0000 UTC m=+0.264983612 container init a7613c35f1d9fae92b549b6f9805fd52c63706e24a143aaf9fdb0be03403b7f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 23 19:09:19 compute-0 podman[252592]: 2026-01-23 19:09:19.058496136 +0000 UTC m=+0.271775308 container start a7613c35f1d9fae92b549b6f9805fd52c63706e24a143aaf9fdb0be03403b7f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lamport, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 23 19:09:19 compute-0 podman[252592]: 2026-01-23 19:09:19.08118943 +0000 UTC m=+0.294468652 container attach a7613c35f1d9fae92b549b6f9805fd52c63706e24a143aaf9fdb0be03403b7f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lamport, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 19:09:19 compute-0 distracted_lamport[252609]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:09:19 compute-0 distracted_lamport[252609]: --> All data devices are unavailable
Jan 23 19:09:19 compute-0 systemd[1]: libpod-a7613c35f1d9fae92b549b6f9805fd52c63706e24a143aaf9fdb0be03403b7f5.scope: Deactivated successfully.
Jan 23 19:09:19 compute-0 podman[252592]: 2026-01-23 19:09:19.542372583 +0000 UTC m=+0.755651765 container died a7613c35f1d9fae92b549b6f9805fd52c63706e24a143aaf9fdb0be03403b7f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 23 19:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e4cfcaf9966ecbb44162793a0c8d7b635d20b2321ebbea5bc6db327d2cb1ffb-merged.mount: Deactivated successfully.
Jan 23 19:09:19 compute-0 podman[252592]: 2026-01-23 19:09:19.629543992 +0000 UTC m=+0.842823174 container remove a7613c35f1d9fae92b549b6f9805fd52c63706e24a143aaf9fdb0be03403b7f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lamport, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:09:19 compute-0 systemd[1]: libpod-conmon-a7613c35f1d9fae92b549b6f9805fd52c63706e24a143aaf9fdb0be03403b7f5.scope: Deactivated successfully.
Jan 23 19:09:19 compute-0 sudo[252513]: pam_unix(sudo:session): session closed for user root
Jan 23 19:09:19 compute-0 sudo[252642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:09:19 compute-0 sudo[252642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:09:19 compute-0 sudo[252642]: pam_unix(sudo:session): session closed for user root
Jan 23 19:09:19 compute-0 sudo[252667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:09:19 compute-0 sudo[252667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:09:20 compute-0 ceph-mon[75097]: pgmap v1198: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 1 op/s
Jan 23 19:09:20 compute-0 podman[252704]: 2026-01-23 19:09:20.092880797 +0000 UTC m=+0.023590217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:09:20 compute-0 podman[252704]: 2026-01-23 19:09:20.3448733 +0000 UTC m=+0.275582690 container create 1c2e418e83673077b3a337561e88eab6ab16f35b99ae991097821afe24b757a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:09:20 compute-0 systemd[1]: Started libpod-conmon-1c2e418e83673077b3a337561e88eab6ab16f35b99ae991097821afe24b757a9.scope.
Jan 23 19:09:20 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:09:20 compute-0 podman[252704]: 2026-01-23 19:09:20.505428681 +0000 UTC m=+0.436138101 container init 1c2e418e83673077b3a337561e88eab6ab16f35b99ae991097821afe24b757a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 23 19:09:20 compute-0 podman[252704]: 2026-01-23 19:09:20.512244198 +0000 UTC m=+0.442953588 container start 1c2e418e83673077b3a337561e88eab6ab16f35b99ae991097821afe24b757a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_rubin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 19:09:20 compute-0 musing_rubin[252721]: 167 167
Jan 23 19:09:20 compute-0 systemd[1]: libpod-1c2e418e83673077b3a337561e88eab6ab16f35b99ae991097821afe24b757a9.scope: Deactivated successfully.
Jan 23 19:09:20 compute-0 podman[252704]: 2026-01-23 19:09:20.527010239 +0000 UTC m=+0.457719629 container attach 1c2e418e83673077b3a337561e88eab6ab16f35b99ae991097821afe24b757a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_rubin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:09:20 compute-0 podman[252704]: 2026-01-23 19:09:20.527448329 +0000 UTC m=+0.458157729 container died 1c2e418e83673077b3a337561e88eab6ab16f35b99ae991097821afe24b757a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_rubin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-149c1e9c837b65baa5a8a5bff9028082a9d3c2e826fa10cd93f5d1e67f871b04-merged.mount: Deactivated successfully.
Jan 23 19:09:20 compute-0 podman[252704]: 2026-01-23 19:09:20.606868879 +0000 UTC m=+0.537578289 container remove 1c2e418e83673077b3a337561e88eab6ab16f35b99ae991097821afe24b757a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:09:20 compute-0 systemd[1]: libpod-conmon-1c2e418e83673077b3a337561e88eab6ab16f35b99ae991097821afe24b757a9.scope: Deactivated successfully.
Jan 23 19:09:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 173 B/s rd, 19 KiB/s wr, 1 op/s
Jan 23 19:09:20 compute-0 podman[252746]: 2026-01-23 19:09:20.784874966 +0000 UTC m=+0.049826678 container create e90eb6ab86553573384ab53c27b970a65770ca3b5b3c5e1879fe005215772952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shamir, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 19:09:20 compute-0 podman[252746]: 2026-01-23 19:09:20.758231325 +0000 UTC m=+0.023183067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:09:20 compute-0 systemd[1]: Started libpod-conmon-e90eb6ab86553573384ab53c27b970a65770ca3b5b3c5e1879fe005215772952.scope.
Jan 23 19:09:20 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3e963661746f98567d4178a546d2287e74c50aef2702fb944fa419bb1a0ea9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3e963661746f98567d4178a546d2287e74c50aef2702fb944fa419bb1a0ea9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3e963661746f98567d4178a546d2287e74c50aef2702fb944fa419bb1a0ea9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3e963661746f98567d4178a546d2287e74c50aef2702fb944fa419bb1a0ea9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:20 compute-0 podman[252746]: 2026-01-23 19:09:20.942689339 +0000 UTC m=+0.207641071 container init e90eb6ab86553573384ab53c27b970a65770ca3b5b3c5e1879fe005215772952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 19:09:20 compute-0 podman[252746]: 2026-01-23 19:09:20.950674455 +0000 UTC m=+0.215626167 container start e90eb6ab86553573384ab53c27b970a65770ca3b5b3c5e1879fe005215772952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 19:09:20 compute-0 podman[252746]: 2026-01-23 19:09:20.976592818 +0000 UTC m=+0.241544560 container attach e90eb6ab86553573384ab53c27b970a65770ca3b5b3c5e1879fe005215772952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shamir, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 19:09:21 compute-0 funny_shamir[252763]: {
Jan 23 19:09:21 compute-0 funny_shamir[252763]:     "0": [
Jan 23 19:09:21 compute-0 funny_shamir[252763]:         {
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "devices": [
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "/dev/loop3"
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             ],
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_name": "ceph_lv0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_size": "21470642176",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "name": "ceph_lv0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "tags": {
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.cluster_name": "ceph",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.crush_device_class": "",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.encrypted": "0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.objectstore": "bluestore",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.osd_id": "0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.type": "block",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.vdo": "0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.with_tpm": "0"
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             },
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "type": "block",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "vg_name": "ceph_vg0"
Jan 23 19:09:21 compute-0 funny_shamir[252763]:         }
Jan 23 19:09:21 compute-0 funny_shamir[252763]:     ],
Jan 23 19:09:21 compute-0 funny_shamir[252763]:     "1": [
Jan 23 19:09:21 compute-0 funny_shamir[252763]:         {
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "devices": [
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "/dev/loop4"
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             ],
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_name": "ceph_lv1",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_size": "21470642176",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "name": "ceph_lv1",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "tags": {
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.cluster_name": "ceph",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.crush_device_class": "",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.encrypted": "0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.objectstore": "bluestore",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.osd_id": "1",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.type": "block",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.vdo": "0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.with_tpm": "0"
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             },
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "type": "block",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "vg_name": "ceph_vg1"
Jan 23 19:09:21 compute-0 funny_shamir[252763]:         }
Jan 23 19:09:21 compute-0 funny_shamir[252763]:     ],
Jan 23 19:09:21 compute-0 funny_shamir[252763]:     "2": [
Jan 23 19:09:21 compute-0 funny_shamir[252763]:         {
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "devices": [
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "/dev/loop5"
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             ],
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_name": "ceph_lv2",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_size": "21470642176",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "name": "ceph_lv2",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "tags": {
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.cluster_name": "ceph",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.crush_device_class": "",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.encrypted": "0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.objectstore": "bluestore",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.osd_id": "2",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.type": "block",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.vdo": "0",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:                 "ceph.with_tpm": "0"
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             },
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "type": "block",
Jan 23 19:09:21 compute-0 funny_shamir[252763]:             "vg_name": "ceph_vg2"
Jan 23 19:09:21 compute-0 funny_shamir[252763]:         }
Jan 23 19:09:21 compute-0 funny_shamir[252763]:     ]
Jan 23 19:09:21 compute-0 funny_shamir[252763]: }
Jan 23 19:09:21 compute-0 systemd[1]: libpod-e90eb6ab86553573384ab53c27b970a65770ca3b5b3c5e1879fe005215772952.scope: Deactivated successfully.
Jan 23 19:09:21 compute-0 podman[252746]: 2026-01-23 19:09:21.262211253 +0000 UTC m=+0.527162975 container died e90eb6ab86553573384ab53c27b970a65770ca3b5b3c5e1879fe005215772952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shamir, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb3e963661746f98567d4178a546d2287e74c50aef2702fb944fa419bb1a0ea9-merged.mount: Deactivated successfully.
Jan 23 19:09:21 compute-0 podman[252746]: 2026-01-23 19:09:21.342976165 +0000 UTC m=+0.607927877 container remove e90eb6ab86553573384ab53c27b970a65770ca3b5b3c5e1879fe005215772952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 19:09:21 compute-0 systemd[1]: libpod-conmon-e90eb6ab86553573384ab53c27b970a65770ca3b5b3c5e1879fe005215772952.scope: Deactivated successfully.
Jan 23 19:09:21 compute-0 podman[252772]: 2026-01-23 19:09:21.387439451 +0000 UTC m=+0.097649526 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 19:09:21 compute-0 sudo[252667]: pam_unix(sudo:session): session closed for user root
Jan 23 19:09:21 compute-0 sudo[252802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:09:21 compute-0 sudo[252802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:09:21 compute-0 sudo[252802]: pam_unix(sudo:session): session closed for user root
Jan 23 19:09:21 compute-0 sudo[252827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:09:21 compute-0 sudo[252827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:09:21 compute-0 podman[252864]: 2026-01-23 19:09:21.832592702 +0000 UTC m=+0.043566975 container create 4166dfe561e4dfd148b2dffdee03c8a06c7d034e0a24fdc1c25404c08f585713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ardinghelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:09:21 compute-0 systemd[1]: Started libpod-conmon-4166dfe561e4dfd148b2dffdee03c8a06c7d034e0a24fdc1c25404c08f585713.scope.
Jan 23 19:09:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:09:21 compute-0 podman[252864]: 2026-01-23 19:09:21.813013274 +0000 UTC m=+0.023987547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:09:21 compute-0 podman[252864]: 2026-01-23 19:09:21.921981775 +0000 UTC m=+0.132956088 container init 4166dfe561e4dfd148b2dffdee03c8a06c7d034e0a24fdc1c25404c08f585713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 19:09:21 compute-0 podman[252864]: 2026-01-23 19:09:21.930077463 +0000 UTC m=+0.141051736 container start 4166dfe561e4dfd148b2dffdee03c8a06c7d034e0a24fdc1c25404c08f585713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 19:09:21 compute-0 bold_ardinghelli[252880]: 167 167
Jan 23 19:09:21 compute-0 systemd[1]: libpod-4166dfe561e4dfd148b2dffdee03c8a06c7d034e0a24fdc1c25404c08f585713.scope: Deactivated successfully.
Jan 23 19:09:21 compute-0 podman[252864]: 2026-01-23 19:09:21.940504017 +0000 UTC m=+0.151478300 container attach 4166dfe561e4dfd148b2dffdee03c8a06c7d034e0a24fdc1c25404c08f585713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ardinghelli, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:09:21 compute-0 podman[252864]: 2026-01-23 19:09:21.941518552 +0000 UTC m=+0.152492845 container died 4166dfe561e4dfd148b2dffdee03c8a06c7d034e0a24fdc1c25404c08f585713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ardinghelli, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 23 19:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dc7c080e103c814e962deaec330957d11b04c8c5c65da1c91c28f2c8b871eed-merged.mount: Deactivated successfully.
Jan 23 19:09:22 compute-0 podman[252864]: 2026-01-23 19:09:22.037586578 +0000 UTC m=+0.248560851 container remove 4166dfe561e4dfd148b2dffdee03c8a06c7d034e0a24fdc1c25404c08f585713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:09:22 compute-0 systemd[1]: libpod-conmon-4166dfe561e4dfd148b2dffdee03c8a06c7d034e0a24fdc1c25404c08f585713.scope: Deactivated successfully.
Jan 23 19:09:22 compute-0 ceph-mon[75097]: pgmap v1199: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 173 B/s rd, 19 KiB/s wr, 1 op/s
Jan 23 19:09:22 compute-0 podman[252906]: 2026-01-23 19:09:22.210984992 +0000 UTC m=+0.053019125 container create 7860e3045bb64d06263388d30f64baa9755b0a3f986eb64fd504e0f9808ede03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 19:09:22 compute-0 systemd[1]: Started libpod-conmon-7860e3045bb64d06263388d30f64baa9755b0a3f986eb64fd504e0f9808ede03.scope.
Jan 23 19:09:22 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:09:22 compute-0 podman[252906]: 2026-01-23 19:09:22.187303164 +0000 UTC m=+0.029337317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8bbf327cdb8de8bdb85f1d4c967f54c23090a9fb1e141006a022217c386d4da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8bbf327cdb8de8bdb85f1d4c967f54c23090a9fb1e141006a022217c386d4da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8bbf327cdb8de8bdb85f1d4c967f54c23090a9fb1e141006a022217c386d4da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8bbf327cdb8de8bdb85f1d4c967f54c23090a9fb1e141006a022217c386d4da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:09:22 compute-0 podman[252906]: 2026-01-23 19:09:22.304204019 +0000 UTC m=+0.146238162 container init 7860e3045bb64d06263388d30f64baa9755b0a3f986eb64fd504e0f9808ede03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:09:22 compute-0 podman[252906]: 2026-01-23 19:09:22.314407118 +0000 UTC m=+0.156441251 container start 7860e3045bb64d06263388d30f64baa9755b0a3f986eb64fd504e0f9808ede03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 19:09:22 compute-0 podman[252906]: 2026-01-23 19:09:22.318900418 +0000 UTC m=+0.160934571 container attach 7860e3045bb64d06263388d30f64baa9755b0a3f986eb64fd504e0f9808ede03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_goldstine, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 19:09:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Jan 23 19:09:23 compute-0 lvm[253001]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:09:23 compute-0 lvm[253004]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:09:23 compute-0 lvm[253003]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:09:23 compute-0 lvm[253004]: VG ceph_vg2 finished
Jan 23 19:09:23 compute-0 lvm[253003]: VG ceph_vg1 finished
Jan 23 19:09:23 compute-0 lvm[253001]: VG ceph_vg0 finished
Jan 23 19:09:23 compute-0 silly_goldstine[252923]: {}
Jan 23 19:09:23 compute-0 systemd[1]: libpod-7860e3045bb64d06263388d30f64baa9755b0a3f986eb64fd504e0f9808ede03.scope: Deactivated successfully.
Jan 23 19:09:23 compute-0 systemd[1]: libpod-7860e3045bb64d06263388d30f64baa9755b0a3f986eb64fd504e0f9808ede03.scope: Consumed 1.359s CPU time.
Jan 23 19:09:23 compute-0 podman[253007]: 2026-01-23 19:09:23.14369293 +0000 UTC m=+0.025075723 container died 7860e3045bb64d06263388d30f64baa9755b0a3f986eb64fd504e0f9808ede03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_goldstine, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 19:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8bbf327cdb8de8bdb85f1d4c967f54c23090a9fb1e141006a022217c386d4da-merged.mount: Deactivated successfully.
Jan 23 19:09:23 compute-0 podman[253007]: 2026-01-23 19:09:23.333204248 +0000 UTC m=+0.214587021 container remove 7860e3045bb64d06263388d30f64baa9755b0a3f986eb64fd504e0f9808ede03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_goldstine, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 19:09:23 compute-0 systemd[1]: libpod-conmon-7860e3045bb64d06263388d30f64baa9755b0a3f986eb64fd504e0f9808ede03.scope: Deactivated successfully.
Jan 23 19:09:23 compute-0 sudo[252827]: pam_unix(sudo:session): session closed for user root
Jan 23 19:09:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:09:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:09:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:09:23 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:09:23 compute-0 sudo[253022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:09:23 compute-0 sudo[253022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:09:23 compute-0 sudo[253022]: pam_unix(sudo:session): session closed for user root
Jan 23 19:09:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:24 compute-0 ceph-mon[75097]: pgmap v1200: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Jan 23 19:09:24 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:09:24 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:09:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Jan 23 19:09:26 compute-0 ceph-mon[75097]: pgmap v1201: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Jan 23 19:09:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:09:28 compute-0 ceph-mon[75097]: pgmap v1202: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:09:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:09:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:09:28
Jan 23 19:09:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:09:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:09:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups', 'images']
Jan 23 19:09:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:09:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:09:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:09:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:09:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:09:30 compute-0 ceph-mon[75097]: pgmap v1203: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:09:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:09:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:09:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:09:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:09:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:09:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:09:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:09:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:09:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:09:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:09:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:09:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:09:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:09:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:09:32 compute-0 ceph-mon[75097]: pgmap v1204: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:09:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:09:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:33 compute-0 ceph-mgr[75390]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61/b630a642-2f75-48cc-b83e-f84ce4a3c572'.
Jan 23 19:09:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61/.meta.tmp'
Jan 23 19:09:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61/.meta.tmp' to config b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61/.meta'
Jan 23 19:09:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "format": "json"}]: dispatch
Jan 23 19:09:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:33 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 19:09:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:09:34 compute-0 ceph-mon[75097]: pgmap v1205: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:09:34 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/1987967203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 23 19:09:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s wr, 0 op/s
Jan 23 19:09:35 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 23 19:09:35 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "format": "json"}]: dispatch
Jan 23 19:09:36 compute-0 ceph-mon[75097]: pgmap v1206: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s wr, 0 op/s
Jan 23 19:09:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s wr, 0 op/s
Jan 23 19:09:37 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "snap_name": "b02922a0-ce94-4a9d-86c4-763307de222d", "format": "json"}]: dispatch
Jan 23 19:09:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b02922a0-ce94-4a9d-86c4-763307de222d, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:37 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b02922a0-ce94-4a9d-86c4-763307de222d, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:38 compute-0 ceph-mon[75097]: pgmap v1207: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s wr, 0 op/s
Jan 23 19:09:38 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "snap_name": "b02922a0-ce94-4a9d-86c4-763307de222d", "format": "json"}]: dispatch
Jan 23 19:09:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s wr, 0 op/s
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659052183659598 of space, bias 1.0, pg target 0.19977156550978795 quantized to 32 (current 32)
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005053589879173998 of space, bias 4.0, pg target 0.6064307855008798 quantized to 16 (current 16)
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:09:40 compute-0 ceph-mon[75097]: pgmap v1208: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s wr, 0 op/s
Jan 23 19:09:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Jan 23 19:09:41 compute-0 ceph-mon[75097]: pgmap v1209: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "snap_name": "b02922a0-ce94-4a9d-86c4-763307de222d_a1ee0ee3-99a7-4e0f-8d21-99cc122f6a44", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b02922a0-ce94-4a9d-86c4-763307de222d_a1ee0ee3-99a7-4e0f-8d21-99cc122f6a44, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61/.meta.tmp'
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61/.meta.tmp' to config b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61/.meta'
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b02922a0-ce94-4a9d-86c4-763307de222d_a1ee0ee3-99a7-4e0f-8d21-99cc122f6a44, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "snap_name": "b02922a0-ce94-4a9d-86c4-763307de222d", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b02922a0-ce94-4a9d-86c4-763307de222d, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61/.meta.tmp'
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61/.meta.tmp' to config b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61/.meta'
Jan 23 19:09:42 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "snap_name": "b02922a0-ce94-4a9d-86c4-763307de222d_a1ee0ee3-99a7-4e0f-8d21-99cc122f6a44", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:42 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b02922a0-ce94-4a9d-86c4-763307de222d, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:43 compute-0 ceph-mon[75097]: pgmap v1210: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Jan 23 19:09:43 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "snap_name": "b02922a0-ce94-4a9d-86c4-763307de222d", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:44 compute-0 podman[253047]: 2026-01-23 19:09:44.013290075 +0000 UTC m=+0.091971947 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 23 19:09:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s wr, 2 op/s
Jan 23 19:09:46 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "format": "json"}]: dispatch
Jan 23 19:09:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aa2c8728-a923-4fbb-967a-08864f87ea61, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:09:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aa2c8728-a923-4fbb-967a-08864f87ea61, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 23 19:09:46 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:09:46.216+0000 7f06a7d20640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa2c8728-a923-4fbb-967a-08864f87ea61' of type subvolume
Jan 23 19:09:46 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa2c8728-a923-4fbb-967a-08864f87ea61' of type subvolume
Jan 23 19:09:46 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aa2c8728-a923-4fbb-967a-08864f87ea61'' moved to trashcan
Jan 23 19:09:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 19:09:46 compute-0 ceph-mgr[75390]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa2c8728-a923-4fbb-967a-08864f87ea61, vol_name:cephfs) < ""
Jan 23 19:09:46 compute-0 ceph-mon[75097]: pgmap v1211: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s wr, 2 op/s
Jan 23 19:09:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 2 op/s
Jan 23 19:09:47 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "format": "json"}]: dispatch
Jan 23 19:09:47 compute-0 ceph-mon[75097]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa2c8728-a923-4fbb-967a-08864f87ea61", "force": true, "format": "json"}]: dispatch
Jan 23 19:09:48 compute-0 ceph-mon[75097]: pgmap v1212: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 2 op/s
Jan 23 19:09:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 2 op/s
Jan 23 19:09:50 compute-0 ceph-mon[75097]: pgmap v1213: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 2 op/s
Jan 23 19:09:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 52 KiB/s wr, 4 op/s
Jan 23 19:09:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 23 19:09:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 23 19:09:51 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 23 19:09:51 compute-0 podman[253073]: 2026-01-23 19:09:51.964801319 +0000 UTC m=+0.052145335 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 23 19:09:52 compute-0 ceph-mon[75097]: pgmap v1214: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 52 KiB/s wr, 4 op/s
Jan 23 19:09:52 compute-0 ceph-mon[75097]: osdmap e164: 3 total, 3 up, 3 in
Jan 23 19:09:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 52 KiB/s wr, 4 op/s
Jan 23 19:09:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:54 compute-0 ceph-mon[75097]: pgmap v1216: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 52 KiB/s wr, 4 op/s
Jan 23 19:09:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 34 KiB/s wr, 3 op/s
Jan 23 19:09:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 34 KiB/s wr, 3 op/s
Jan 23 19:09:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:09:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2072105608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:09:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:09:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2072105608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:09:58 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:09:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 34 KiB/s wr, 3 op/s
Jan 23 19:10:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:10:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:10:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:10:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:10:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 14 KiB/s wr, 0 op/s
Jan 23 19:10:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:10:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:10:01 compute-0 nova_compute[240119]: 2026-01-23 19:10:01.343 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:10:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 91 B/s rd, 13 KiB/s wr, 0 op/s
Jan 23 19:10:03 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:03 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.572957993s, txc = 0x55944b372c00, txc bytes = 2273, txc ios = 1, txc cost = 672273, txc onodes = 1, DB updates = 7, DB bytes = 6277, cost max = 95489052 on 2026-01-23T18:35:28.353611+0000, txc max = 104 on 2026-01-23T19:04:00.405895+0000
Jan 23 19:10:03 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 7.572911263s
Jan 23 19:10:03 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 7.572911263s
Jan 23 19:10:03 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.627645016s, txc = 0x55944b2c1b00, txc bytes = 25069, txc ios = 1, txc cost = 695069, txc onodes = 1, DB updates = 7, DB bytes = 30114, cost max = 95489052 on 2026-01-23T18:35:28.353611+0000, txc max = 104 on 2026-01-23T19:04:00.405895+0000
Jan 23 19:10:03 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.627498150s, txc = 0x55944c30c300, txc bytes = 40207, txc ios = 1, txc cost = 710207, txc onodes = 1, DB updates = 7, DB bytes = 42454, cost max = 95489052 on 2026-01-23T18:35:28.353611+0000, txc max = 104 on 2026-01-23T19:04:00.405895+0000
Jan 23 19:10:03 compute-0 ceph-mon[75097]: pgmap v1217: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 34 KiB/s wr, 3 op/s
Jan 23 19:10:04 compute-0 nova_compute[240119]: 2026-01-23 19:10:04.347 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:10:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 21 KiB/s wr, 1 op/s
Jan 23 19:10:04 compute-0 ceph-mon[75097]: pgmap v1218: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 34 KiB/s wr, 3 op/s
Jan 23 19:10:04 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2072105608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:10:04 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2072105608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:10:04 compute-0 ceph-mon[75097]: pgmap v1219: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 34 KiB/s wr, 3 op/s
Jan 23 19:10:04 compute-0 ceph-mon[75097]: pgmap v1220: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 14 KiB/s wr, 0 op/s
Jan 23 19:10:04 compute-0 ceph-mon[75097]: pgmap v1221: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 91 B/s rd, 13 KiB/s wr, 0 op/s
Jan 23 19:10:05 compute-0 nova_compute[240119]: 2026-01-23 19:10:05.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:10:05 compute-0 nova_compute[240119]: 2026-01-23 19:10:05.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:10:05 compute-0 nova_compute[240119]: 2026-01-23 19:10:05.372 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:10:05 compute-0 nova_compute[240119]: 2026-01-23 19:10:05.373 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:10:05 compute-0 nova_compute[240119]: 2026-01-23 19:10:05.373 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:10:05 compute-0 nova_compute[240119]: 2026-01-23 19:10:05.373 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:10:05 compute-0 nova_compute[240119]: 2026-01-23 19:10:05.373 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:10:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:10:05 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2134323971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:10:05 compute-0 nova_compute[240119]: 2026-01-23 19:10:05.973 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:10:06 compute-0 nova_compute[240119]: 2026-01-23 19:10:06.174 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:10:06 compute-0 nova_compute[240119]: 2026-01-23 19:10:06.176 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5029MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:10:06 compute-0 nova_compute[240119]: 2026-01-23 19:10:06.176 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:10:06 compute-0 nova_compute[240119]: 2026-01-23 19:10:06.177 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:10:06 compute-0 ceph-mon[75097]: pgmap v1222: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 21 KiB/s wr, 1 op/s
Jan 23 19:10:06 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2134323971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:10:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s wr, 0 op/s
Jan 23 19:10:07 compute-0 nova_compute[240119]: 2026-01-23 19:10:07.705 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:10:07 compute-0 nova_compute[240119]: 2026-01-23 19:10:07.706 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:10:07 compute-0 nova_compute[240119]: 2026-01-23 19:10:07.771 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing inventories for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 19:10:07 compute-0 nova_compute[240119]: 2026-01-23 19:10:07.839 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Updating ProviderTree inventory for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 19:10:07 compute-0 nova_compute[240119]: 2026-01-23 19:10:07.840 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Updating inventory in ProviderTree for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 19:10:07 compute-0 nova_compute[240119]: 2026-01-23 19:10:07.862 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing aggregate associations for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 19:10:07 compute-0 nova_compute[240119]: 2026-01-23 19:10:07.883 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing trait associations for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 19:10:07 compute-0 nova_compute[240119]: 2026-01-23 19:10:07.899 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:10:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:10:08 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184240303' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:10:08 compute-0 nova_compute[240119]: 2026-01-23 19:10:08.484 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:10:08 compute-0 nova_compute[240119]: 2026-01-23 19:10:08.491 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:10:08 compute-0 nova_compute[240119]: 2026-01-23 19:10:08.522 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:10:08 compute-0 nova_compute[240119]: 2026-01-23 19:10:08.523 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:10:08 compute-0 nova_compute[240119]: 2026-01-23 19:10:08.523 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:10:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 23 19:10:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 23 19:10:08 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 23 19:10:08 compute-0 ceph-mon[75097]: pgmap v1223: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s wr, 0 op/s
Jan 23 19:10:08 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3184240303' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:10:08 compute-0 ceph-mon[75097]: osdmap e165: 3 total, 3 up, 3 in
Jan 23 19:10:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Jan 23 19:10:09 compute-0 nova_compute[240119]: 2026-01-23 19:10:09.523 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:10:09 compute-0 nova_compute[240119]: 2026-01-23 19:10:09.524 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:10:09 compute-0 nova_compute[240119]: 2026-01-23 19:10:09.524 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:10:09 compute-0 nova_compute[240119]: 2026-01-23 19:10:09.524 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:10:10 compute-0 ceph-mon[75097]: pgmap v1225: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Jan 23 19:10:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Jan 23 19:10:11 compute-0 nova_compute[240119]: 2026-01-23 19:10:11.344 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:10:11 compute-0 nova_compute[240119]: 2026-01-23 19:10:11.408 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:10:11 compute-0 nova_compute[240119]: 2026-01-23 19:10:11.408 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:10:11 compute-0 nova_compute[240119]: 2026-01-23 19:10:11.408 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:10:11 compute-0 nova_compute[240119]: 2026-01-23 19:10:11.441 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:10:11 compute-0 ceph-mon[75097]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 23 19:10:11 compute-0 ceph-mon[75097]: pgmap v1226: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Jan 23 19:10:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Jan 23 19:10:12 compute-0 ceph-mon[75097]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 23 19:10:13 compute-0 nova_compute[240119]: 2026-01-23 19:10:13.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:10:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:10:13.591 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:10:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:10:13.592 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:10:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:10:13.592 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:10:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:14 compute-0 ceph-mon[75097]: pgmap v1227: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Jan 23 19:10:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:15 compute-0 podman[253137]: 2026-01-23 19:10:15.046397242 +0000 UTC m=+0.122895212 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 23 19:10:16 compute-0 ceph-mon[75097]: pgmap v1228: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:18 compute-0 ceph-mon[75097]: pgmap v1229: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:20 compute-0 ceph-mon[75097]: pgmap v1230: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:22 compute-0 ceph-mon[75097]: pgmap v1231: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:22 compute-0 podman[253162]: 2026-01-23 19:10:22.972356681 +0000 UTC m=+0.054551993 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 23 19:10:23 compute-0 sudo[253182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:10:23 compute-0 sudo[253182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:10:23 compute-0 sudo[253182]: pam_unix(sudo:session): session closed for user root
Jan 23 19:10:23 compute-0 sudo[253207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:10:23 compute-0 sudo[253207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:10:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:24 compute-0 ceph-mon[75097]: pgmap v1232: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:24 compute-0 sudo[253207]: pam_unix(sudo:session): session closed for user root
Jan 23 19:10:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:10:24 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:10:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:10:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:10:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:10:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:10:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:10:24 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:10:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:10:24 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:10:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:10:24 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:10:24 compute-0 sudo[253263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:10:24 compute-0 sudo[253263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:10:24 compute-0 sudo[253263]: pam_unix(sudo:session): session closed for user root
Jan 23 19:10:24 compute-0 sudo[253288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:10:24 compute-0 sudo[253288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:10:24 compute-0 podman[253326]: 2026-01-23 19:10:24.599657011 +0000 UTC m=+0.061592035 container create 9c5075b83e61bb5812c925c672c47b944500d79a99b69ac4f7e0aebf504bac4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mclaren, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:10:24 compute-0 systemd[1]: Started libpod-conmon-9c5075b83e61bb5812c925c672c47b944500d79a99b69ac4f7e0aebf504bac4e.scope.
Jan 23 19:10:24 compute-0 podman[253326]: 2026-01-23 19:10:24.561126931 +0000 UTC m=+0.023061985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:10:24 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:10:24 compute-0 podman[253326]: 2026-01-23 19:10:24.709888903 +0000 UTC m=+0.171823967 container init 9c5075b83e61bb5812c925c672c47b944500d79a99b69ac4f7e0aebf504bac4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mclaren, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:10:24 compute-0 podman[253326]: 2026-01-23 19:10:24.716611948 +0000 UTC m=+0.178546982 container start 9c5075b83e61bb5812c925c672c47b944500d79a99b69ac4f7e0aebf504bac4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:10:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:24 compute-0 distracted_mclaren[253343]: 167 167
Jan 23 19:10:24 compute-0 podman[253326]: 2026-01-23 19:10:24.722908531 +0000 UTC m=+0.184843595 container attach 9c5075b83e61bb5812c925c672c47b944500d79a99b69ac4f7e0aebf504bac4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 19:10:24 compute-0 systemd[1]: libpod-9c5075b83e61bb5812c925c672c47b944500d79a99b69ac4f7e0aebf504bac4e.scope: Deactivated successfully.
Jan 23 19:10:24 compute-0 podman[253326]: 2026-01-23 19:10:24.723443005 +0000 UTC m=+0.185378059 container died 9c5075b83e61bb5812c925c672c47b944500d79a99b69ac4f7e0aebf504bac4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 19:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cd0803b403642299d14b00518d04ed9d029209fb20aa0039dd44340c6fc8565-merged.mount: Deactivated successfully.
Jan 23 19:10:24 compute-0 podman[253326]: 2026-01-23 19:10:24.803793856 +0000 UTC m=+0.265728890 container remove 9c5075b83e61bb5812c925c672c47b944500d79a99b69ac4f7e0aebf504bac4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mclaren, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 19:10:24 compute-0 systemd[1]: libpod-conmon-9c5075b83e61bb5812c925c672c47b944500d79a99b69ac4f7e0aebf504bac4e.scope: Deactivated successfully.
Jan 23 19:10:24 compute-0 podman[253367]: 2026-01-23 19:10:24.963513147 +0000 UTC m=+0.043738119 container create 5f6b617f9bc95606a2fd39796c6a2721c88dc023970638fcc7cbfd834a4bba09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_johnson, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 19:10:25 compute-0 podman[253367]: 2026-01-23 19:10:24.943812526 +0000 UTC m=+0.024037518 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:10:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:10:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:10:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:10:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:10:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:10:25 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:10:25 compute-0 systemd[1]: Started libpod-conmon-5f6b617f9bc95606a2fd39796c6a2721c88dc023970638fcc7cbfd834a4bba09.scope.
Jan 23 19:10:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e520cd442a72d23afa239a4744e2951172a8018739d17087fcfddfcd0f910eb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e520cd442a72d23afa239a4744e2951172a8018739d17087fcfddfcd0f910eb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e520cd442a72d23afa239a4744e2951172a8018739d17087fcfddfcd0f910eb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e520cd442a72d23afa239a4744e2951172a8018739d17087fcfddfcd0f910eb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e520cd442a72d23afa239a4744e2951172a8018739d17087fcfddfcd0f910eb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:25 compute-0 podman[253367]: 2026-01-23 19:10:25.226977171 +0000 UTC m=+0.307202173 container init 5f6b617f9bc95606a2fd39796c6a2721c88dc023970638fcc7cbfd834a4bba09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_johnson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:10:25 compute-0 podman[253367]: 2026-01-23 19:10:25.235216202 +0000 UTC m=+0.315441174 container start 5f6b617f9bc95606a2fd39796c6a2721c88dc023970638fcc7cbfd834a4bba09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_johnson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 19:10:25 compute-0 podman[253367]: 2026-01-23 19:10:25.238550133 +0000 UTC m=+0.318775125 container attach 5f6b617f9bc95606a2fd39796c6a2721c88dc023970638fcc7cbfd834a4bba09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_johnson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:10:25 compute-0 musing_johnson[253384]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:10:25 compute-0 musing_johnson[253384]: --> All data devices are unavailable
Jan 23 19:10:25 compute-0 systemd[1]: libpod-5f6b617f9bc95606a2fd39796c6a2721c88dc023970638fcc7cbfd834a4bba09.scope: Deactivated successfully.
Jan 23 19:10:25 compute-0 podman[253367]: 2026-01-23 19:10:25.710890649 +0000 UTC m=+0.791115621 container died 5f6b617f9bc95606a2fd39796c6a2721c88dc023970638fcc7cbfd834a4bba09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_johnson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:10:26 compute-0 ceph-mon[75097]: pgmap v1233: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e520cd442a72d23afa239a4744e2951172a8018739d17087fcfddfcd0f910eb4-merged.mount: Deactivated successfully.
Jan 23 19:10:26 compute-0 podman[253367]: 2026-01-23 19:10:26.389825749 +0000 UTC m=+1.470050721 container remove 5f6b617f9bc95606a2fd39796c6a2721c88dc023970638fcc7cbfd834a4bba09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_johnson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 19:10:26 compute-0 sudo[253288]: pam_unix(sudo:session): session closed for user root
Jan 23 19:10:26 compute-0 systemd[1]: libpod-conmon-5f6b617f9bc95606a2fd39796c6a2721c88dc023970638fcc7cbfd834a4bba09.scope: Deactivated successfully.
Jan 23 19:10:26 compute-0 sudo[253416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:10:26 compute-0 sudo[253416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:10:26 compute-0 sudo[253416]: pam_unix(sudo:session): session closed for user root
Jan 23 19:10:26 compute-0 sudo[253441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:10:26 compute-0 sudo[253441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:10:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:26 compute-0 podman[253478]: 2026-01-23 19:10:26.827787415 +0000 UTC m=+0.022778898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:10:26 compute-0 podman[253478]: 2026-01-23 19:10:26.981670272 +0000 UTC m=+0.176661735 container create 416e55ac686a394657a2552af8519f527e5c00c1b49e69badf8ab551fa450746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:10:27 compute-0 systemd[1]: Started libpod-conmon-416e55ac686a394657a2552af8519f527e5c00c1b49e69badf8ab551fa450746.scope.
Jan 23 19:10:27 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:10:27 compute-0 podman[253478]: 2026-01-23 19:10:27.071233829 +0000 UTC m=+0.266225312 container init 416e55ac686a394657a2552af8519f527e5c00c1b49e69badf8ab551fa450746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 19:10:27 compute-0 podman[253478]: 2026-01-23 19:10:27.077693697 +0000 UTC m=+0.272685160 container start 416e55ac686a394657a2552af8519f527e5c00c1b49e69badf8ab551fa450746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:10:27 compute-0 distracted_noether[253495]: 167 167
Jan 23 19:10:27 compute-0 podman[253478]: 2026-01-23 19:10:27.08108031 +0000 UTC m=+0.276071773 container attach 416e55ac686a394657a2552af8519f527e5c00c1b49e69badf8ab551fa450746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:10:27 compute-0 systemd[1]: libpod-416e55ac686a394657a2552af8519f527e5c00c1b49e69badf8ab551fa450746.scope: Deactivated successfully.
Jan 23 19:10:27 compute-0 conmon[253495]: conmon 416e55ac686a394657a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-416e55ac686a394657a2552af8519f527e5c00c1b49e69badf8ab551fa450746.scope/container/memory.events
Jan 23 19:10:27 compute-0 podman[253478]: 2026-01-23 19:10:27.083082279 +0000 UTC m=+0.278073742 container died 416e55ac686a394657a2552af8519f527e5c00c1b49e69badf8ab551fa450746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noether, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 19:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6ee49052cff92674e058c501df4a87c3aa104899fbb113556fb05ac0aa0372e-merged.mount: Deactivated successfully.
Jan 23 19:10:27 compute-0 podman[253478]: 2026-01-23 19:10:27.124676885 +0000 UTC m=+0.319668348 container remove 416e55ac686a394657a2552af8519f527e5c00c1b49e69badf8ab551fa450746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noether, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 19:10:27 compute-0 systemd[1]: libpod-conmon-416e55ac686a394657a2552af8519f527e5c00c1b49e69badf8ab551fa450746.scope: Deactivated successfully.
Jan 23 19:10:27 compute-0 podman[253519]: 2026-01-23 19:10:27.274664318 +0000 UTC m=+0.038328538 container create 58be2e584f8d68eb2dfe3504ef99e02cbe250d3a86b52fa47433c0c48adb0be1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_montalcini, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 19:10:27 compute-0 systemd[1]: Started libpod-conmon-58be2e584f8d68eb2dfe3504ef99e02cbe250d3a86b52fa47433c0c48adb0be1.scope.
Jan 23 19:10:27 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c971f9219f612cfbd57438ef08b3d5cc639ff92ad4f65c61c7f971443169e500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c971f9219f612cfbd57438ef08b3d5cc639ff92ad4f65c61c7f971443169e500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c971f9219f612cfbd57438ef08b3d5cc639ff92ad4f65c61c7f971443169e500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c971f9219f612cfbd57438ef08b3d5cc639ff92ad4f65c61c7f971443169e500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:27 compute-0 podman[253519]: 2026-01-23 19:10:27.258357499 +0000 UTC m=+0.022021739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:10:27 compute-0 podman[253519]: 2026-01-23 19:10:27.356352822 +0000 UTC m=+0.120017072 container init 58be2e584f8d68eb2dfe3504ef99e02cbe250d3a86b52fa47433c0c48adb0be1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_montalcini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:10:27 compute-0 podman[253519]: 2026-01-23 19:10:27.363316182 +0000 UTC m=+0.126980402 container start 58be2e584f8d68eb2dfe3504ef99e02cbe250d3a86b52fa47433c0c48adb0be1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 23 19:10:27 compute-0 podman[253519]: 2026-01-23 19:10:27.367246799 +0000 UTC m=+0.130911049 container attach 58be2e584f8d68eb2dfe3504ef99e02cbe250d3a86b52fa47433c0c48adb0be1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]: {
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:     "0": [
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:         {
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "devices": [
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "/dev/loop3"
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             ],
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_name": "ceph_lv0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_size": "21470642176",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "name": "ceph_lv0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "tags": {
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.cluster_name": "ceph",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.crush_device_class": "",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.encrypted": "0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.objectstore": "bluestore",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.osd_id": "0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.type": "block",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.vdo": "0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.with_tpm": "0"
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             },
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "type": "block",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "vg_name": "ceph_vg0"
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:         }
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:     ],
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:     "1": [
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:         {
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "devices": [
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "/dev/loop4"
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             ],
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_name": "ceph_lv1",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_size": "21470642176",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "name": "ceph_lv1",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "tags": {
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.cluster_name": "ceph",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.crush_device_class": "",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.encrypted": "0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.objectstore": "bluestore",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.osd_id": "1",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.type": "block",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.vdo": "0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.with_tpm": "0"
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             },
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "type": "block",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "vg_name": "ceph_vg1"
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:         }
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:     ],
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:     "2": [
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:         {
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "devices": [
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "/dev/loop5"
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             ],
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_name": "ceph_lv2",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_size": "21470642176",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "name": "ceph_lv2",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "tags": {
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.cluster_name": "ceph",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.crush_device_class": "",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.encrypted": "0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.objectstore": "bluestore",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.osd_id": "2",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.type": "block",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.vdo": "0",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:                 "ceph.with_tpm": "0"
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             },
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "type": "block",
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:             "vg_name": "ceph_vg2"
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:         }
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]:     ]
Jan 23 19:10:27 compute-0 quizzical_montalcini[253535]: }
Jan 23 19:10:27 compute-0 systemd[1]: libpod-58be2e584f8d68eb2dfe3504ef99e02cbe250d3a86b52fa47433c0c48adb0be1.scope: Deactivated successfully.
Jan 23 19:10:27 compute-0 podman[253519]: 2026-01-23 19:10:27.646314723 +0000 UTC m=+0.409978943 container died 58be2e584f8d68eb2dfe3504ef99e02cbe250d3a86b52fa47433c0c48adb0be1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 19:10:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:28 compute-0 ceph-mon[75097]: pgmap v1234: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:10:28
Jan 23 19:10:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:10:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:10:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'images', 'vms']
Jan 23 19:10:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c971f9219f612cfbd57438ef08b3d5cc639ff92ad4f65c61c7f971443169e500-merged.mount: Deactivated successfully.
Jan 23 19:10:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:29 compute-0 podman[253519]: 2026-01-23 19:10:29.964056234 +0000 UTC m=+2.727720464 container remove 58be2e584f8d68eb2dfe3504ef99e02cbe250d3a86b52fa47433c0c48adb0be1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 19:10:30 compute-0 sudo[253441]: pam_unix(sudo:session): session closed for user root
Jan 23 19:10:30 compute-0 systemd[1]: libpod-conmon-58be2e584f8d68eb2dfe3504ef99e02cbe250d3a86b52fa47433c0c48adb0be1.scope: Deactivated successfully.
Jan 23 19:10:30 compute-0 sudo[253556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:10:30 compute-0 sudo[253556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:10:30 compute-0 sudo[253556]: pam_unix(sudo:session): session closed for user root
Jan 23 19:10:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:10:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:10:30 compute-0 sudo[253581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:10:30 compute-0 sudo[253581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:10:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:10:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:10:30 compute-0 podman[253618]: 2026-01-23 19:10:30.48677953 +0000 UTC m=+0.074664545 container create 1c34af93164f74f68d3ddacd0eb72e345172cb2789a3b0092f7ab5ef8d7d2d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 23 19:10:30 compute-0 podman[253618]: 2026-01-23 19:10:30.438781798 +0000 UTC m=+0.026666833 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:10:30 compute-0 systemd[1]: Started libpod-conmon-1c34af93164f74f68d3ddacd0eb72e345172cb2789a3b0092f7ab5ef8d7d2d31.scope.
Jan 23 19:10:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:10:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:30 compute-0 ceph-mon[75097]: pgmap v1235: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:30 compute-0 podman[253618]: 2026-01-23 19:10:30.774449065 +0000 UTC m=+0.362334100 container init 1c34af93164f74f68d3ddacd0eb72e345172cb2789a3b0092f7ab5ef8d7d2d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:10:30 compute-0 podman[253618]: 2026-01-23 19:10:30.781004946 +0000 UTC m=+0.368889961 container start 1c34af93164f74f68d3ddacd0eb72e345172cb2789a3b0092f7ab5ef8d7d2d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:10:30 compute-0 vigorous_beaver[253635]: 167 167
Jan 23 19:10:30 compute-0 systemd[1]: libpod-1c34af93164f74f68d3ddacd0eb72e345172cb2789a3b0092f7ab5ef8d7d2d31.scope: Deactivated successfully.
Jan 23 19:10:30 compute-0 podman[253618]: 2026-01-23 19:10:30.829285154 +0000 UTC m=+0.417170179 container attach 1c34af93164f74f68d3ddacd0eb72e345172cb2789a3b0092f7ab5ef8d7d2d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 19:10:30 compute-0 podman[253618]: 2026-01-23 19:10:30.829816668 +0000 UTC m=+0.417701693 container died 1c34af93164f74f68d3ddacd0eb72e345172cb2789a3b0092f7ab5ef8d7d2d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:10:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:10:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed86aac013c4a8f403d8a58ac8adb20a481d71bc6c5c602b83e4edd75e610272-merged.mount: Deactivated successfully.
Jan 23 19:10:30 compute-0 podman[253618]: 2026-01-23 19:10:30.935831237 +0000 UTC m=+0.523716252 container remove 1c34af93164f74f68d3ddacd0eb72e345172cb2789a3b0092f7ab5ef8d7d2d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:10:30 compute-0 systemd[1]: libpod-conmon-1c34af93164f74f68d3ddacd0eb72e345172cb2789a3b0092f7ab5ef8d7d2d31.scope: Deactivated successfully.
Jan 23 19:10:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:10:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:10:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:10:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:10:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:10:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:10:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:10:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:10:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:10:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:10:31 compute-0 podman[253661]: 2026-01-23 19:10:31.142876342 +0000 UTC m=+0.088267136 container create da792f3b887a0eac47ec416e61fa09281bacc308baaf7d3e630157505b15aff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_vaughan, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 19:10:31 compute-0 podman[253661]: 2026-01-23 19:10:31.078579582 +0000 UTC m=+0.023970406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:10:31 compute-0 systemd[1]: Started libpod-conmon-da792f3b887a0eac47ec416e61fa09281bacc308baaf7d3e630157505b15aff9.scope.
Jan 23 19:10:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2014193bcd0e628b0fa376404a5d6e9af1fb5615839f1ce37b667d14ada39c95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2014193bcd0e628b0fa376404a5d6e9af1fb5615839f1ce37b667d14ada39c95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2014193bcd0e628b0fa376404a5d6e9af1fb5615839f1ce37b667d14ada39c95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2014193bcd0e628b0fa376404a5d6e9af1fb5615839f1ce37b667d14ada39c95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:10:31 compute-0 podman[253661]: 2026-01-23 19:10:31.2778654 +0000 UTC m=+0.223256204 container init da792f3b887a0eac47ec416e61fa09281bacc308baaf7d3e630157505b15aff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:10:31 compute-0 podman[253661]: 2026-01-23 19:10:31.285077656 +0000 UTC m=+0.230468450 container start da792f3b887a0eac47ec416e61fa09281bacc308baaf7d3e630157505b15aff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:10:31 compute-0 podman[253661]: 2026-01-23 19:10:31.293969572 +0000 UTC m=+0.239360366 container attach da792f3b887a0eac47ec416e61fa09281bacc308baaf7d3e630157505b15aff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_vaughan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 23 19:10:31 compute-0 ceph-mon[75097]: pgmap v1236: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:32 compute-0 lvm[253757]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:10:32 compute-0 lvm[253756]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:10:32 compute-0 lvm[253757]: VG ceph_vg1 finished
Jan 23 19:10:32 compute-0 lvm[253756]: VG ceph_vg0 finished
Jan 23 19:10:32 compute-0 lvm[253759]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:10:32 compute-0 lvm[253759]: VG ceph_vg2 finished
Jan 23 19:10:32 compute-0 festive_vaughan[253678]: {}
Jan 23 19:10:32 compute-0 systemd[1]: libpod-da792f3b887a0eac47ec416e61fa09281bacc308baaf7d3e630157505b15aff9.scope: Deactivated successfully.
Jan 23 19:10:32 compute-0 podman[253661]: 2026-01-23 19:10:32.139819019 +0000 UTC m=+1.085209823 container died da792f3b887a0eac47ec416e61fa09281bacc308baaf7d3e630157505b15aff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:10:32 compute-0 systemd[1]: libpod-da792f3b887a0eac47ec416e61fa09281bacc308baaf7d3e630157505b15aff9.scope: Consumed 1.437s CPU time.
Jan 23 19:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2014193bcd0e628b0fa376404a5d6e9af1fb5615839f1ce37b667d14ada39c95-merged.mount: Deactivated successfully.
Jan 23 19:10:32 compute-0 podman[253661]: 2026-01-23 19:10:32.195310004 +0000 UTC m=+1.140700798 container remove da792f3b887a0eac47ec416e61fa09281bacc308baaf7d3e630157505b15aff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:10:32 compute-0 systemd[1]: libpod-conmon-da792f3b887a0eac47ec416e61fa09281bacc308baaf7d3e630157505b15aff9.scope: Deactivated successfully.
Jan 23 19:10:32 compute-0 sudo[253581]: pam_unix(sudo:session): session closed for user root
Jan 23 19:10:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:10:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:10:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:10:32 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:10:32 compute-0 sudo[253773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:10:32 compute-0 sudo[253773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:10:32 compute-0 sudo[253773]: pam_unix(sudo:session): session closed for user root
Jan 23 19:10:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:10:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:10:34 compute-0 ceph-mon[75097]: pgmap v1237: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:36 compute-0 ceph-mon[75097]: pgmap v1238: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:38 compute-0 ceph-mon[75097]: pgmap v1239: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:40 compute-0 ceph-mon[75097]: pgmap v1240: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188278444904582 of space, bias 4.0, pg target 0.6225934133885498 quantized to 16 (current 16)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:10:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:41 compute-0 ceph-mon[75097]: pgmap v1241: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:43 compute-0 ceph-mon[75097]: pgmap v1242: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:46 compute-0 podman[253798]: 2026-01-23 19:10:46.035299727 +0000 UTC m=+0.116964248 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:10:46 compute-0 ceph-mon[75097]: pgmap v1243: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:48 compute-0 ceph-mon[75097]: pgmap v1244: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:50 compute-0 ceph-mon[75097]: pgmap v1245: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:52 compute-0 sshd-session[253824]: Accepted publickey for zuul from 192.168.122.10 port 44924 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 19:10:52 compute-0 systemd-logind[792]: New session 51 of user zuul.
Jan 23 19:10:52 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 23 19:10:52 compute-0 sshd-session[253824]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 19:10:52 compute-0 sudo[253828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 23 19:10:52 compute-0 sudo[253828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 19:10:52 compute-0 ceph-mon[75097]: pgmap v1246: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:53 compute-0 podman[253862]: 2026-01-23 19:10:53.333107986 +0000 UTC m=+0.058020647 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 19:10:53 compute-0 ceph-mon[75097]: pgmap v1247: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:54 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14498 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:10:55 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14500 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:10:55 compute-0 ceph-mon[75097]: pgmap v1248: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:55 compute-0 ceph-mon[75097]: from='client.14498 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:10:55 compute-0 ceph-mon[75097]: from='client.14500 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:10:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 23 19:10:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3881169480' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 23 19:10:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:10:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3892153508' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:10:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:10:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3892153508' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:10:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3881169480' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 23 19:10:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3892153508' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:10:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3892153508' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:10:57 compute-0 ceph-mon[75097]: pgmap v1249: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:10:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:10:59 compute-0 ceph-mon[75097]: pgmap v1250: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:11:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f06a1c81d60>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f069bba95e0>)]
Jan 23 19:11:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 19:11:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 19:11:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:11:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f069bbf4df0>)]
Jan 23 19:11:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 19:11:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:11:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:11:01 compute-0 ovs-vsctl[254171]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 23 19:11:01 compute-0 nova_compute[240119]: 2026-01-23 19:11:01.344 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:11:01 compute-0 ceph-mon[75097]: pgmap v1251: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:02 compute-0 virtqemud[240019]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 23 19:11:02 compute-0 virtqemud[240019]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 23 19:11:02 compute-0 virtqemud[240019]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 23 19:11:02 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: cache status {prefix=cache status} (starting...)
Jan 23 19:11:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:02 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: client ls {prefix=client ls} (starting...)
Jan 23 19:11:02 compute-0 lvm[254489]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:11:02 compute-0 lvm[254489]: VG ceph_vg0 finished
Jan 23 19:11:02 compute-0 ceph-mon[75097]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.rszfwe(active, since 36m)
Jan 23 19:11:03 compute-0 lvm[254525]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:11:03 compute-0 lvm[254525]: VG ceph_vg1 finished
Jan 23 19:11:03 compute-0 lvm[254528]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:11:03 compute-0 lvm[254528]: VG ceph_vg2 finished
Jan 23 19:11:03 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14508 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:03 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: damage ls {prefix=damage ls} (starting...)
Jan 23 19:11:03 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump loads {prefix=dump loads} (starting...)
Jan 23 19:11:03 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14510 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:03 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 23 19:11:03 compute-0 ceph-mon[75097]: pgmap v1252: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:03 compute-0 ceph-mon[75097]: mgrmap e17: compute-0.rszfwe(active, since 36m)
Jan 23 19:11:03 compute-0 ceph-mon[75097]: from='client.14508 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:03 compute-0 ceph-mon[75097]: from='client.14510 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:03 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 23 19:11:04 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 23 19:11:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 23 19:11:04 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380957634' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 23 19:11:04 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 23 19:11:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:04 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 23 19:11:04 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 23 19:11:04 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14516 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:04 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:11:04.609+0000 7f06d8601640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 19:11:04 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 19:11:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:11:04 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154905720' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:11:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:04 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: ops {prefix=ops} (starting...)
Jan 23 19:11:05 compute-0 ceph-mon[75097]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:05 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/380957634' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 23 19:11:05 compute-0 ceph-mon[75097]: from='client.14516 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:05 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2154905720' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:11:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 23 19:11:05 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3585218719' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 23 19:11:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 23 19:11:05 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3686006849' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 23 19:11:05 compute-0 nova_compute[240119]: 2026-01-23 19:11:05.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:11:05 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session ls {prefix=session ls} (starting...)
Jan 23 19:11:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 23 19:11:05 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3939560842' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 23 19:11:05 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: status {prefix=status} (starting...)
Jan 23 19:11:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 23 19:11:05 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089298354' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 23 19:11:06 compute-0 ceph-mon[75097]: pgmap v1253: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:06 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3585218719' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 23 19:11:06 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3686006849' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 23 19:11:06 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3939560842' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 23 19:11:06 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4089298354' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 23 19:11:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14530 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 23 19:11:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/67872424' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 23 19:11:06 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14532 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 23 19:11:06 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3047393346' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 23 19:11:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:07 compute-0 ceph-mon[75097]: from='client.14530 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:07 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/67872424' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 23 19:11:07 compute-0 ceph-mon[75097]: from='client.14532 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:07 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3047393346' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 23 19:11:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 23 19:11:07 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2880174174' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 23 19:11:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 19:11:07 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790483864' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 23 19:11:07 compute-0 nova_compute[240119]: 2026-01-23 19:11:07.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:11:07 compute-0 nova_compute[240119]: 2026-01-23 19:11:07.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:11:07 compute-0 nova_compute[240119]: 2026-01-23 19:11:07.373 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:11:07 compute-0 nova_compute[240119]: 2026-01-23 19:11:07.373 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:11:07 compute-0 nova_compute[240119]: 2026-01-23 19:11:07.373 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:11:07 compute-0 nova_compute[240119]: 2026-01-23 19:11:07.374 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:11:07 compute-0 nova_compute[240119]: 2026-01-23 19:11:07.374 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:11:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 23 19:11:07 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532075953' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 23 19:11:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 23 19:11:07 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363950951' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 23 19:11:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:11:07 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279988540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:11:07 compute-0 nova_compute[240119]: 2026-01-23 19:11:07.994 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:11:08 compute-0 ceph-mon[75097]: pgmap v1254: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:08 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2880174174' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 23 19:11:08 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1790483864' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 23 19:11:08 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2532075953' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 23 19:11:08 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1363950951' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 23 19:11:08 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1279988540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.138 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.139 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4836MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.139 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.140 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.215 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.216 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:11:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14546 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:08 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:11:08.256+0000 7f06d8601640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 19:11:08 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.260 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:11:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 23 19:11:08 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1536569462' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 23 19:11:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:11:08 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3036015582' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:11:08 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14554 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.774 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.780 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.797 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.799 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:11:08 compute-0 nova_compute[240119]: 2026-01-23 19:11:08.799 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:11:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 23 19:11:08 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3682405941' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 23 19:11:09 compute-0 ceph-mon[75097]: from='client.14546 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:09 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1536569462' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 23 19:11:09 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3036015582' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:11:09 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3682405941' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 23 19:11:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:38:59.365318+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 786432 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:00.365487+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 778240 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:01.365637+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:30.933057+0000 osd.2 (osd.2) 116 : cluster [DBG] 7.2 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:30.943589+0000 osd.2 (osd.2) 117 : cluster [DBG] 7.2 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 778240 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819730 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 117)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:30.933057+0000 osd.2 (osd.2) 116 : cluster [DBG] 7.2 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:30.943589+0000 osd.2 (osd.2) 117 : cluster [DBG] 7.2 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:02.365795+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 778240 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:03.366076+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 778240 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:04.366321+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 761856 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:05.366596+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 761856 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:06.366717+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 753664 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819730 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:07.366839+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 753664 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:08.366981+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 745472 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:09.367122+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.022749901s of 12.031757355s, submitted: 4
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 745472 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:10.367271+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:39.958712+0000 osd.2 (osd.2) 118 : cluster [DBG] 7.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:39.969248+0000 osd.2 (osd.2) 119 : cluster [DBG] 7.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 119)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:39.958712+0000 osd.2 (osd.2) 118 : cluster [DBG] 7.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:39.969248+0000 osd.2 (osd.2) 119 : cluster [DBG] 7.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 737280 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:11.367542+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:41.004897+0000 osd.2 (osd.2) 120 : cluster [DBG] 8.4 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:41.015434+0000 osd.2 (osd.2) 121 : cluster [DBG] 8.4 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 121)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:41.004897+0000 osd.2 (osd.2) 120 : cluster [DBG] 8.4 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:41.015434+0000 osd.2 (osd.2) 121 : cluster [DBG] 8.4 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 737280 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826963 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:12.367849+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:42.012953+0000 osd.2 (osd.2) 122 : cluster [DBG] 7.a scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:42.023605+0000 osd.2 (osd.2) 123 : cluster [DBG] 7.a scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 123)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:42.012953+0000 osd.2 (osd.2) 122 : cluster [DBG] 7.a scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:42.023605+0000 osd.2 (osd.2) 123 : cluster [DBG] 7.a scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 729088 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:13.368102+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 720896 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:14.368247+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 720896 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:15.368434+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 712704 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:16.368601+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 712704 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826963 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:17.368802+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 712704 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:18.369074+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:48.062283+0000 osd.2 (osd.2) 124 : cluster [DBG] 7.e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:48.072909+0000 osd.2 (osd.2) 125 : cluster [DBG] 7.e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 125)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:48.062283+0000 osd.2 (osd.2) 124 : cluster [DBG] 7.e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:48.072909+0000 osd.2 (osd.2) 125 : cluster [DBG] 7.e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 712704 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:19.369337+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:49.105549+0000 osd.2 (osd.2) 126 : cluster [DBG] 3.e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:49.116099+0000 osd.2 (osd.2) 127 : cluster [DBG] 3.e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 127)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:49.105549+0000 osd.2 (osd.2) 126 : cluster [DBG] 3.e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:49.116099+0000 osd.2 (osd.2) 127 : cluster [DBG] 3.e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 712704 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:20.369534+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.382591248s of 11.407349586s, submitted: 10
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 704512 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:21.369634+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 1 last_log 128 sent 127 num 1 unsent 1 sending 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:51.200117+0000 osd.2 (osd.2) 128 : cluster [DBG] 3.11 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 834198 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:22.369783+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 129 sent 128 num 2 unsent 1 sending 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:51.370268+0000 osd.2 (osd.2) 129 : cluster [DBG] 3.11 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 704512 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 128)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:51.200117+0000 osd.2 (osd.2) 128 : cluster [DBG] 3.11 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 129)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:51.370268+0000 osd.2 (osd.2) 129 : cluster [DBG] 3.11 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:23.369980+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 696320 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:24.370147+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 696320 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:25.370339+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:55.099047+0000 osd.2 (osd.2) 130 : cluster [DBG] 7.15 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:55.108965+0000 osd.2 (osd.2) 131 : cluster [DBG] 7.15 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 679936 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 131)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:55.099047+0000 osd.2 (osd.2) 130 : cluster [DBG] 7.15 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:55.108965+0000 osd.2 (osd.2) 131 : cluster [DBG] 7.15 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:26.370612+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:56.087462+0000 osd.2 (osd.2) 132 : cluster [DBG] 7.5 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:56.098120+0000 osd.2 (osd.2) 133 : cluster [DBG] 7.5 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 688128 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 133)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:56.087462+0000 osd.2 (osd.2) 132 : cluster [DBG] 7.5 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:56.098120+0000 osd.2 (osd.2) 133 : cluster [DBG] 7.5 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841435 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:27.370820+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:57.128904+0000 osd.2 (osd.2) 134 : cluster [DBG] 8.1c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:57.139501+0000 osd.2 (osd.2) 135 : cluster [DBG] 8.1c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 671744 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 135)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:57.128904+0000 osd.2 (osd.2) 134 : cluster [DBG] 8.1c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:57.139501+0000 osd.2 (osd.2) 135 : cluster [DBG] 8.1c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:28.371429+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 663552 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:29.371582+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:59.149012+0000 osd.2 (osd.2) 136 : cluster [DBG] 8.12 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:39:59.159687+0000 osd.2 (osd.2) 137 : cluster [DBG] 8.12 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 663552 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:30.371806+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 655360 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 137)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:59.149012+0000 osd.2 (osd.2) 136 : cluster [DBG] 8.12 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:39:59.159687+0000 osd.2 (osd.2) 137 : cluster [DBG] 8.12 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:31.371929+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 655360 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843848 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:32.372038+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 647168 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:33.372209+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 647168 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:34.372741+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 647168 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:35.373004+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 630784 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:36.373141+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 630784 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:37.373290+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843848 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 630784 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:38.373427+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 622592 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.673845291s of 17.979822159s, submitted: 10
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:39.373548+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:09.345969+0000 osd.2 (osd.2) 138 : cluster [DBG] 7.1c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:09.356516+0000 osd.2 (osd.2) 139 : cluster [DBG] 7.1c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 614400 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 139)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:09.345969+0000 osd.2 (osd.2) 138 : cluster [DBG] 7.1c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:09.356516+0000 osd.2 (osd.2) 139 : cluster [DBG] 7.1c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:40.373767+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 606208 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:41.373941+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 606208 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:42.374159+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846261 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 598016 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:43.374293+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:13.349486+0000 osd.2 (osd.2) 140 : cluster [DBG] 3.16 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:13.359910+0000 osd.2 (osd.2) 141 : cluster [DBG] 3.16 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 589824 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:44.374507+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 4 last_log 143 sent 141 num 4 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:14.302648+0000 osd.2 (osd.2) 142 : cluster [DBG] 7.11 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:14.313021+0000 osd.2 (osd.2) 143 : cluster [DBG] 7.11 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 581632 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 141)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:13.349486+0000 osd.2 (osd.2) 140 : cluster [DBG] 3.16 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:13.359910+0000 osd.2 (osd.2) 141 : cluster [DBG] 3.16 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:45.374720+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1605632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 143)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:14.302648+0000 osd.2 (osd.2) 142 : cluster [DBG] 7.11 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:14.313021+0000 osd.2 (osd.2) 143 : cluster [DBG] 7.11 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:46.374844+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1605632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:47.375025+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851087 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1597440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:48.375152+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1597440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:49.375367+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1597440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:50.375783+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1597440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:51.375916+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1589248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:52.376052+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851087 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1589248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:53.376200+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1589248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:54.376352+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1581056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:55.376550+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1581056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:56.376773+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1572864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:57.377023+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851087 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1572864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:58.377212+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1564672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:59.377360+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1564672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:00.377549+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1564672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.815608978s of 22.004592896s, submitted: 6
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:01.377769+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:31.350608+0000 osd.2 (osd.2) 144 : cluster [DBG] 8.1b scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:31.361123+0000 osd.2 (osd.2) 145 : cluster [DBG] 8.1b scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 145)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:31.350608+0000 osd.2 (osd.2) 144 : cluster [DBG] 8.1b scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:31.361123+0000 osd.2 (osd.2) 145 : cluster [DBG] 8.1b scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1548288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:02.378003+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855913 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1540096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:03.378125+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:32.392372+0000 osd.2 (osd.2) 146 : cluster [DBG] 3.18 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:32.402963+0000 osd.2 (osd.2) 147 : cluster [DBG] 3.18 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 147)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:32.392372+0000 osd.2 (osd.2) 146 : cluster [DBG] 3.18 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:32.402963+0000 osd.2 (osd.2) 147 : cluster [DBG] 3.18 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1540096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:04.378376+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:33.401232+0000 osd.2 (osd.2) 148 : cluster [DBG] 6.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:33.411765+0000 osd.2 (osd.2) 149 : cluster [DBG] 6.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1523712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 149)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:33.401232+0000 osd.2 (osd.2) 148 : cluster [DBG] 6.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:33.411765+0000 osd.2 (osd.2) 149 : cluster [DBG] 6.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:05.378618+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:34.432199+0000 osd.2 (osd.2) 150 : cluster [DBG] 6.f scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:34.453341+0000 osd.2 (osd.2) 151 : cluster [DBG] 6.f scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1515520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 151)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:34.432199+0000 osd.2 (osd.2) 150 : cluster [DBG] 6.f scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:34.453341+0000 osd.2 (osd.2) 151 : cluster [DBG] 6.f scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:06.378803+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1515520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:07.378969+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860735 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 1507328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:08.379128+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 1507328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:09.379273+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:38.414093+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.11 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:38.424682+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.11 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1490944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 153)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:38.414093+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.11 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:38.424682+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.11 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:10.379486+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1490944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:11.379665+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:40.399236+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1a scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:40.409756+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1a scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 1474560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 155)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:40.399236+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1a scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:40.409756+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1a scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:12.379888+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865565 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 1474560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.593384743s of 12.002846718s, submitted: 12
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:13.380012+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:43.353548+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.1b scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:43.364137+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.1b scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 1466368 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:14.380208+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 157)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:43.353548+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.1b scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:43.364137+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.1b scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 1466368 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:15.380382+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1458176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:16.380536+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1458176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:17.380621+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 867980 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1458176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:18.380789+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1449984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:19.380938+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1449984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:20.381093+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1433600 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:21.381230+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 1 last_log 158 sent 157 num 1 unsent 1 sending 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:51.378313+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.2 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1417216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 158)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:51.378313+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.2 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:22.381431+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 160 sent 158 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:51.388814+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.2 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:52.376223+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1f scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 872808 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1409024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 160)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:51.388814+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.2 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:52.376223+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1f scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:23.381669+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 1 last_log 161 sent 160 num 1 unsent 1 sending 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:52.386820+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1f scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1400832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 161)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:52.386820+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1f scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:24.381877+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1400832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:25.382061+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.013468742s of 12.056985855s, submitted: 6
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 1392640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:26.382211+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:55.410505+0000 osd.2 (osd.2) 162 : cluster [DBG] 11.1e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:40:55.421067+0000 osd.2 (osd.2) 163 : cluster [DBG] 11.1e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 1392640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 163)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:55.410505+0000 osd.2 (osd.2) 162 : cluster [DBG] 11.1e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:40:55.421067+0000 osd.2 (osd.2) 163 : cluster [DBG] 11.1e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:27.382428+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875223 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 1392640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:28.382621+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1384448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:29.382771+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1384448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:30.382998+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1376256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:31.383176+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1359872 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:32.383368+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875223 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1359872 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:33.383546+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1351680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:34.383730+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1351680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:35.383914+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1343488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.933883667s of 10.938046455s, submitted: 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:36.384024+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:06.348577+0000 osd.2 (osd.2) 164 : cluster [DBG] 11.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:06.359060+0000 osd.2 (osd.2) 165 : cluster [DBG] 11.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1343488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:37.384258+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880051 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 165)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:06.348577+0000 osd.2 (osd.2) 164 : cluster [DBG] 11.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:06.359060+0000 osd.2 (osd.2) 165 : cluster [DBG] 11.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1327104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:38.384365+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 3 last_log 168 sent 165 num 3 unsent 3 sending 3
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:07.393416+0000 osd.2 (osd.2) 166 : cluster [DBG] 11.1c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:07.403940+0000 osd.2 (osd.2) 167 : cluster [DBG] 11.1c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:08.377094+0000 osd.2 (osd.2) 168 : cluster [DBG] 11.b scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 168)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:07.393416+0000 osd.2 (osd.2) 166 : cluster [DBG] 11.1c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:07.403940+0000 osd.2 (osd.2) 167 : cluster [DBG] 11.1c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:08.377094+0000 osd.2 (osd.2) 168 : cluster [DBG] 11.b scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1318912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:39.384544+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 1 last_log 169 sent 168 num 1 unsent 1 sending 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:08.387649+0000 osd.2 (osd.2) 169 : cluster [DBG] 11.b scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1310720 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 169)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:08.387649+0000 osd.2 (osd.2) 169 : cluster [DBG] 11.b scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:40.384724+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1294336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:41.384899+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1294336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:42.385079+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 882464 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1286144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:43.385223+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1286144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:44.385376+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:14.341122+0000 osd.2 (osd.2) 170 : cluster [DBG] 11.3 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:14.351629+0000 osd.2 (osd.2) 171 : cluster [DBG] 11.3 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1277952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 171)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:14.341122+0000 osd.2 (osd.2) 170 : cluster [DBG] 11.3 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:14.351629+0000 osd.2 (osd.2) 171 : cluster [DBG] 11.3 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:45.385605+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:15.296421+0000 osd.2 (osd.2) 172 : cluster [DBG] 11.15 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:15.306992+0000 osd.2 (osd.2) 173 : cluster [DBG] 11.15 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1261568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 173)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:15.296421+0000 osd.2 (osd.2) 172 : cluster [DBG] 11.15 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:15.306992+0000 osd.2 (osd.2) 173 : cluster [DBG] 11.15 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:46.385826+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1253376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:47.386018+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 887292 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1253376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.633822441s of 11.922324181s, submitted: 10
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:48.386183+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:18.270943+0000 osd.2 (osd.2) 174 : cluster [DBG] 11.d scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:18.281493+0000 osd.2 (osd.2) 175 : cluster [DBG] 11.d scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1245184 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 175)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:18.270943+0000 osd.2 (osd.2) 174 : cluster [DBG] 11.d scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:18.281493+0000 osd.2 (osd.2) 175 : cluster [DBG] 11.d scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:49.386384+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:19.287096+0000 osd.2 (osd.2) 176 : cluster [DBG] 11.18 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:19.297602+0000 osd.2 (osd.2) 177 : cluster [DBG] 11.18 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1228800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 177)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:19.287096+0000 osd.2 (osd.2) 176 : cluster [DBG] 11.18 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:19.297602+0000 osd.2 (osd.2) 177 : cluster [DBG] 11.18 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:50.386577+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1228800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:51.386699+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1220608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:52.386829+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892120 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1220608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:53.386951+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:23.334680+0000 osd.2 (osd.2) 178 : cluster [DBG] 11.12 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:23.345027+0000 osd.2 (osd.2) 179 : cluster [DBG] 11.12 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1196032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 179)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:23.334680+0000 osd.2 (osd.2) 178 : cluster [DBG] 11.12 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:23.345027+0000 osd.2 (osd.2) 179 : cluster [DBG] 11.12 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:54.387182+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1187840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:55.387424+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:25.326838+0000 osd.2 (osd.2) 180 : cluster [DBG] 11.9 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:25.340991+0000 osd.2 (osd.2) 181 : cluster [DBG] 11.9 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 181)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:25.326838+0000 osd.2 (osd.2) 180 : cluster [DBG] 11.9 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:25.340991+0000 osd.2 (osd.2) 181 : cluster [DBG] 11.9 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1187840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:56.387653+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1179648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:57.387796+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 896948 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1179648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.023686409s of 10.039933205s, submitted: 8
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:58.387978+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:28.310881+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:28.346356+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 183)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:28.310881+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.8 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:28.346356+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.8 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1179648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:59.388174+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:29.326486+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:29.365306+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 185)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:29.326486+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.e scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:29.365306+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.e scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1155072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:00.388403+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:30.347127+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.17 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:30.371937+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.17 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 187)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:30.347127+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.17 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:30.371937+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.17 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1155072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:01.388643+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1155072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:02.388842+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904183 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1146880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:03.388971+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1146880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:04.389104+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1138688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:05.389283+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1122304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:06.389430+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1114112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:07.389588+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:37.281400+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.f scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:37.320304+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.f scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906594 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1114112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 189)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:37.281400+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.f scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:37.320304+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.f scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:08.389797+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1114112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:09.389914+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1114112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:10.390040+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.991673470s of 13.063571930s, submitted: 8
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:11.390156+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 1 last_log 190 sent 189 num 1 unsent 1 sending 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:41.374445+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 190)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:41.374445+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.c scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:12.390343+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 1 last_log 191 sent 190 num 1 unsent 1 sending 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:41.402710+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909005 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 191)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:41.402710+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.c scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:13.390584+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:43.354035+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.7 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:43.389392+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.7 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1081344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 193)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:43.354035+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.7 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:43.389392+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.7 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:14.390835+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:44.340363+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.6 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:44.372131+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.6 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1081344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 195)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:44.340363+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.6 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:44.372131+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.6 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:15.391254+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 1 last_log 196 sent 195 num 1 unsent 1 sending 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:45.374950+0000 osd.2 (osd.2) 196 : cluster [DBG] 9.19 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1073152 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 196)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:45.374950+0000 osd.2 (osd.2) 196 : cluster [DBG] 9.19 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:16.391487+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 1 last_log 197 sent 196 num 1 unsent 1 sending 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:45.417669+0000 osd.2 (osd.2) 197 : cluster [DBG] 9.19 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1056768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 197)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:45.417669+0000 osd.2 (osd.2) 197 : cluster [DBG] 9.19 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:17.391781+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916240 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1056768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:18.391958+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1056768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:19.392101+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1048576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:20.392247+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:50.307022+0000 osd.2 (osd.2) 198 : cluster [DBG] 9.18 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:50.338819+0000 osd.2 (osd.2) 199 : cluster [DBG] 9.18 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1056768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 199)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:50.307022+0000 osd.2 (osd.2) 198 : cluster [DBG] 9.18 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:50.338819+0000 osd.2 (osd.2) 199 : cluster [DBG] 9.18 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:21.392486+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:51.308234+0000 osd.2 (osd.2) 200 : cluster [DBG] 9.13 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  will send 2026-01-23T18:41:51.339942+0000 osd.2 (osd.2) 201 : cluster [DBG] 9.13 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1056768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client handle_log_ack log(last 201)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:51.308234+0000 osd.2 (osd.2) 200 : cluster [DBG] 9.13 scrub starts
Jan 23 19:11:09 compute-0 ceph-osd[88072]: log_client  logged 2026-01-23T18:41:51.339942+0000 osd.2 (osd.2) 201 : cluster [DBG] 9.13 scrub ok
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:22.392785+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1056768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:23.392942+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1048576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:24.393060+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1048576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:25.393243+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1040384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:26.393410+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1040384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:27.393578+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1040384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:28.393719+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1032192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:29.393881+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1032192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:30.394063+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1024000 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:31.394195+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1024000 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:32.394583+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1015808 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:33.394721+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1015808 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:34.394842+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1015808 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:35.395016+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1007616 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:36.395146+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1007616 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:37.395293+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 999424 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:38.395470+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 999424 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:39.395603+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 991232 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:40.395753+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 991232 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:41.395901+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 991232 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:42.396045+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 983040 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:43.396238+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 983040 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:44.396390+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 974848 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:45.396664+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 983040 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:46.396795+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 983040 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:47.396942+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 974848 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:48.397075+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 974848 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:49.397216+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 966656 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:50.397392+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 966656 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:51.397614+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 966656 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:52.397778+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 958464 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:53.397964+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 958464 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:54.398126+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 950272 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:55.398355+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 950272 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:56.398519+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 950272 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:57.398689+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 942080 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:58.398835+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 942080 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:59.398996+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 933888 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:00.399146+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 933888 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:01.399277+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 925696 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:02.399419+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 925696 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:03.399714+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 925696 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:04.399886+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 917504 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:05.400121+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 917504 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:06.400308+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 917504 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:07.400490+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 909312 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:08.400704+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 909312 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:09.400865+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 909312 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:10.401031+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 901120 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:11.401198+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 901120 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:12.401431+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 892928 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:13.401611+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 892928 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:14.401747+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 884736 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:15.401999+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 884736 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:16.402130+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 884736 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:17.402289+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 876544 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:18.402459+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 876544 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:19.402648+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 868352 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:20.402842+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 868352 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:21.402992+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 868352 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:22.403206+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 860160 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:23.403371+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 860160 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:24.403666+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 851968 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:25.403870+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 851968 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:26.404018+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 843776 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:27.404159+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 843776 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:28.404280+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 843776 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:29.404469+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 835584 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:30.404660+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 835584 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:31.404839+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 835584 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:32.404970+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 827392 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:33.405140+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 827392 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:34.405328+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 827392 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:35.405496+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 819200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:36.405619+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 819200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:37.405757+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:38.405887+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:39.406020+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:40.406198+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 802816 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:41.406319+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 802816 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:42.406458+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:43.406595+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:44.406733+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 778240 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:45.406887+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 770048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:46.407055+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 770048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:47.407253+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 770048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:48.415936+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:49.416069+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:50.416181+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 753664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:51.416287+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 753664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:52.416422+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 753664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:53.416546+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 745472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:54.416696+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 745472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:55.416929+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 737280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:56.417354+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 737280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:57.417519+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 729088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:58.417668+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 729088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:59.417823+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 729088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:00.417996+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 712704 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:01.418181+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 712704 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:02.418399+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 704512 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:03.418530+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 704512 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:04.418704+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 688128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:05.418934+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 679936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:06.419118+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 679936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:07.419284+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 671744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:08.419415+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 671744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:09.419595+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 671744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:10.419775+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 663552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:11.419938+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 663552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:12.420114+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 655360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:13.420235+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 655360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:14.420362+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 647168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:15.420540+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 647168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:16.420682+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 647168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:17.420830+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 638976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:18.420958+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 638976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:19.421092+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 630784 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:20.421234+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:21.421408+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:22.421586+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:23.421783+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 614400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:24.421953+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 614400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:25.422136+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:26.422275+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:27.422392+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:28.422580+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 598016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:29.422705+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 581632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:30.422861+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 573440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:31.422980+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 573440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:32.423169+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:33.423291+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:34.423397+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:35.423607+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 557056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:36.423735+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 557056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:37.423898+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 557056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:38.424074+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 557056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:39.424239+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 540672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:40.424396+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 540672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:41.472041+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 532480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:42.472194+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 532480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:43.472359+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 524288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:44.472520+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 516096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:45.472766+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 516096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:46.472932+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 507904 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:47.473075+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 507904 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:48.473230+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 499712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:49.473383+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 499712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:50.473591+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 499712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:51.473735+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 491520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:52.473928+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 491520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:53.474176+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 483328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:54.474316+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 483328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:55.474951+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 483328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:56.475305+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 475136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:57.475429+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 475136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:58.475586+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:59.475707+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:00.475823+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:01.475971+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 450560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:02.476117+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 450560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:03.476268+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 442368 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:04.476425+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 425984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:05.476649+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 425984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:06.476792+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:07.476947+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:08.477124+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:09.477331+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 409600 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:10.477476+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 409600 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:11.477586+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 401408 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:12.477698+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 401408 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:13.477841+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 401408 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:14.477953+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 385024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:15.478099+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 385024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:16.478259+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 376832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:17.478419+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 376832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:18.478658+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 376832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:19.478788+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 368640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:20.478974+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 368640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:21.479128+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:22.479317+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:23.479461+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:24.479610+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:25.479759+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:26.479932+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 352256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:27.480069+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:28.480187+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:29.480329+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:30.480470+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:31.480624+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:32.480768+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:33.480966+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 319488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:34.481233+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 319488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:35.481433+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 311296 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:36.481620+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 311296 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:37.481769+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:38.481885+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:39.482033+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:40.482161+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:41.482286+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:42.482425+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 294912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:43.482551+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 294912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:44.482719+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 294912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:45.482908+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 286720 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:46.483033+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 286720 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:47.483157+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:48.483293+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:49.483450+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:50.483636+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 262144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:51.483760+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 262144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:52.483894+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 262144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:53.484028+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 253952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:54.484175+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 253952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:55.484350+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 253952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:56.484479+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:57.484598+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:58.484709+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:59.484836+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:00.484975+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 221184 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 5436 writes, 23K keys, 5436 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5436 writes, 801 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5436 writes, 23K keys, 5436 commit groups, 1.0 writes per commit group, ingest: 18.34 MB, 0.03 MB/s
                                           Interval WAL: 5436 writes, 801 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:01.485089+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:02.485218+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:03.485392+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:04.485591+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:05.485793+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:06.485918+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:07.486052+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:08.486210+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:09.486326+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 122880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:10.486498+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 106496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:11.486715+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:12.486836+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:13.486948+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 90112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:14.487091+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 90112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:15.487276+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 81920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:16.487434+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 73728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:17.487547+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 73728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:18.487690+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 65536 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:19.487816+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 65536 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:20.487989+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 65536 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:21.488133+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 57344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:22.488279+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 57344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:23.488413+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 49152 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:24.488553+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 49152 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:25.488770+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 40960 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:26.488973+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 32768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:27.489126+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 32768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:28.489234+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 32768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:29.489372+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:30.489509+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:31.489682+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:32.489814+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:33.489971+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:34.490148+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:35.490354+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 0 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:36.490596+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:37.490810+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:38.490967+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:39.491107+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:40.491264+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:41.491415+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:42.491617+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:43.491754+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:44.491907+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1015808 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:45.492111+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:46.492289+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:47.492424+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:48.492602+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:49.492741+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:50.492881+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:51.493029+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:52.493159+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:53.493332+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:54.493467+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:55.493757+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:56.493912+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:57.494058+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:58.494238+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:59.494394+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:00.494527+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:01.494671+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:02.494851+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 291.457550049s of 291.542022705s, submitted: 12
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:03.494951+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 843776 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:04.495074+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 802816 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:05.495263+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:06.495387+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:07.495522+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:08.495619+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:09.495816+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:10.495974+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:11.496144+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:12.496319+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:13.496485+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:14.496673+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:15.496850+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:16.496996+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:17.497137+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:18.497335+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:19.497537+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:20.497714+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:21.497895+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:22.498034+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:23.498190+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 712704 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:24.498344+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 696320 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:25.498507+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:26.498618+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:27.498758+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:28.498895+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:29.499026+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:30.499153+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:31.499354+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:32.499516+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 663552 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:33.499689+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 663552 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:34.499837+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:35.500064+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:36.500242+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:37.500387+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:38.500526+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:39.500686+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:40.500855+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:41.501052+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:42.501272+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:43.501431+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:44.501664+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:45.501891+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:46.502143+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:47.502356+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:48.502508+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:49.502650+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:50.502789+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:51.502939+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:52.503063+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 565248 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:53.503262+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 565248 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:54.503423+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:55.503647+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:56.503840+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 548864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:57.503971+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 548864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:58.504102+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:59.504243+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:00.504356+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:01.504501+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:02.504625+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:03.504759+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:04.504986+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:05.505214+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:06.505343+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:07.505465+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:08.505838+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:09.506005+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:10.506145+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:11.506294+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:12.506450+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:13.506610+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:14.506730+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:15.513220+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:16.513373+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:17.513509+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:18.513657+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:19.513785+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:20.513900+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:21.514011+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:22.514127+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:23.514243+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:24.514436+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:25.514610+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:26.514754+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:27.514915+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:28.515047+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:29.515664+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:30.516144+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:31.516351+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:32.516726+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:33.517490+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:34.517672+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:35.517852+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:36.518130+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:37.518466+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:38.518753+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:39.518905+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:40.519334+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:41.519474+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:42.519623+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:43.519763+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:44.519909+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:45.520066+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:46.520222+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:47.520374+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:48.520537+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:49.520628+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:50.520773+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:51.520929+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:52.521084+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:53.521269+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:54.521419+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:55.521616+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:56.521732+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:57.521937+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:58.522088+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:59.522229+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:00.522402+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:01.522647+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:02.522798+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:03.522931+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:04.523080+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:05.523265+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:06.523408+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:07.523544+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:08.523663+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:09.523814+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:10.523945+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:11.524170+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:12.524375+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:13.524508+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:14.524792+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:15.525025+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:16.525158+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:17.525384+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:18.529212+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:19.529342+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:20.529498+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:21.529639+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:22.530381+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:23.530827+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:24.531051+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:25.531290+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:26.531468+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:27.531632+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:28.531825+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:29.531994+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 23 19:11:09 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3096066787' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:30.532120+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:31.532319+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:32.532473+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:33.532616+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:34.532779+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:35.532952+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:36.533087+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:37.533198+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:38.533377+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:39.533604+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:40.533751+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:41.533920+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:42.534074+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:43.534306+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:44.534425+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:45.534627+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:46.534750+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:47.534876+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:48.535010+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:49.535174+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:50.535303+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:51.535424+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:52.535605+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:53.535755+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:54.535890+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:55.536084+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:56.536303+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:57.536490+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:58.536637+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:59.536787+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:00.536988+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:01.537148+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:02.537306+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:03.537518+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:04.537742+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:05.537983+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:06.538158+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:07.538293+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:08.538455+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:09.538593+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:10.538778+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:11.538950+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:12.539084+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:13.539237+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:14.539358+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:15.539531+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:16.539675+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:17.539844+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:18.540001+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:19.540134+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:20.540343+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:21.540471+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:22.540813+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:23.540950+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:24.541075+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:25.541240+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:26.541463+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:27.541654+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:28.541796+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:29.541930+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:30.542066+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:31.542196+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:32.542320+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:33.542439+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:34.542627+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:35.542801+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:36.542979+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:37.543142+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:38.543335+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:39.543497+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:40.543766+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:41.543909+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:42.544030+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:43.544881+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:44.545038+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:45.545201+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:46.545363+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:47.545543+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:48.545959+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:49.546134+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:50.546263+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:51.546461+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:52.546849+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:53.547014+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:54.547144+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:55.547337+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:56.547610+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:57.547783+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:58.547926+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:59.548086+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:00.548370+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:01.548509+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:02.548635+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:03.548756+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:04.548894+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:05.549055+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:06.549313+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:07.549433+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:08.549621+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc ms_handle_reset ms_handle_reset con 0x564373d84000
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: get_auth_request con 0x564373ac8400 auth_method 0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:09.549747+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:10.549872+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:11.549985+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:12.550099+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:13.550212+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:14.550357+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:15.550553+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:16.550727+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:17.550858+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:18.551354+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:19.551512+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:20.551634+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:21.551764+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:22.551896+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:23.552028+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:24.552176+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:25.552357+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:26.552512+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:27.552661+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:28.552815+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:29.552943+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:30.553094+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:31.553236+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:32.553360+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:33.553498+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:34.553617+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:35.553923+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:36.554100+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:37.554234+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:38.554390+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:39.554628+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:40.554807+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:41.554959+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:42.555101+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:43.555942+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:44.556600+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:45.557082+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:46.557297+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:47.558083+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:48.558788+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:49.559507+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:50.559734+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:51.560383+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:52.560552+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:53.560968+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:54.561413+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:55.561628+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:56.561764+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:57.561894+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:58.562055+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:59.562243+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:00.562453+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:01.562703+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:02.562957+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x564375ddf800
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.236267090s of 300.497070312s, submitted: 90
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:03.563156+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:04.563487+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:05.563697+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:06.563840+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:07.563974+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:08.564139+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:09.564363+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:10.564534+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:11.564669+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:12.564863+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:13.565021+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:14.565178+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:15.565337+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:16.565639+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:17.565836+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:18.566019+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:19.566220+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:20.566341+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:21.566509+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:22.566683+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:23.566805+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:24.566953+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:25.567123+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:26.567297+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:27.567441+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:28.567646+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:29.567788+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:30.567979+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:31.568156+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:32.568301+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:33.568458+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:34.568634+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:35.568799+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:36.568930+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:37.569083+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:38.569340+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:39.569472+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:40.569611+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:41.569740+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:42.569885+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:43.570025+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:44.570166+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:45.570381+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:46.570580+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:47.570729+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:48.570847+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:49.571020+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:50.571194+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:51.571380+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:52.571512+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:53.571644+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:54.571799+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:55.571981+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:56.572101+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:57.572294+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:58.572470+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:59.572619+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:00.572765+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:01.572918+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:02.573073+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:03.573230+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:04.573337+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:05.573462+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:06.573602+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:07.573750+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:08.573875+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:09.574172+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:10.574307+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:11.574462+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:12.574611+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:13.574744+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:14.574922+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:15.575103+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:16.575244+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:17.575374+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:18.575487+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:19.575613+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:20.575734+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:21.575860+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:22.575972+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:23.576093+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:24.576250+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:25.576462+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:26.576649+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:27.576851+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:28.577040+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:29.577172+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:30.577346+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:31.577503+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:32.577635+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:33.577782+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:34.577919+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:35.578073+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:36.578218+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:37.578360+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:38.578481+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:39.578635+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:40.578774+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:41.578896+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:42.579030+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:43.579157+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:44.579266+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:45.579462+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:46.579648+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:47.580108+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:48.580291+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:49.580504+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:50.580770+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:51.580943+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:52.581057+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:53.581175+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:54.581328+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:55.581481+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:56.581636+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:57.581745+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:58.581875+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:59.581999+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 696320 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:00.582223+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 696320 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:01.582458+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:02.582626+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:03.582756+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:04.582946+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:05.583187+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:06.583322+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:07.583456+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:08.583585+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:09.583765+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:10.583975+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:11.584136+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:12.584263+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:13.584395+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:14.584573+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:15.584733+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:16.584897+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:17.585121+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:18.585272+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:19.585428+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:20.585625+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:21.585779+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:22.585917+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:23.586044+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:24.586229+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:25.586492+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:26.586695+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:27.586902+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:28.587097+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:29.587283+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:30.587439+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:31.587671+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:32.587856+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:33.587992+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:34.588187+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:35.588395+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:36.588523+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:37.588691+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:38.588871+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:39.589076+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:40.589269+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:41.589473+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:42.589714+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:43.589867+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:44.590006+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:45.590193+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:46.590318+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:47.590477+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:48.590662+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:49.590815+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:50.591019+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:51.591174+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:52.591381+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:53.591644+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:54.591817+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:55.592241+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:56.592386+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:57.592510+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:58.592692+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:59.592901+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 622592 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:00.593132+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:01.593321+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:02.593515+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:03.593706+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:04.593863+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:05.594042+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:06.594197+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:07.594427+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:08.594618+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:09.594807+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:10.595007+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:11.595228+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:12.595370+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:13.595530+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000048s
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:14.595682+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:15.595878+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:16.596095+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:17.596270+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:18.596419+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:19.596648+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 598016 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:20.596820+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 598016 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:21.596952+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 598016 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:22.597122+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 598016 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:23.597257+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 598016 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:24.597401+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:25.597592+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:26.597723+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:27.597862+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:28.598015+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:29.598470+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:30.599993+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:31.601075+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:32.601815+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:33.602394+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:34.602894+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:35.603359+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:36.603723+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:37.603982+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:38.604202+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:39.604422+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:40.604766+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:41.605100+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:42.605408+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:43.605697+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:44.605954+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:45.606210+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:46.606786+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:47.607034+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:48.607195+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:49.607439+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:50.607627+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:51.607848+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:52.608005+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:53.608149+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:54.608282+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:55.608474+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:56.608636+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:57.608795+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:58.608978+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:59.609139+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:00.609356+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 5664 writes, 24K keys, 5664 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5664 writes, 915 syncs, 6.19 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:01.609488+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:02.612240+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:03.612400+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:04.612616+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:05.612785+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:06.612959+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:07.613115+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:08.613281+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:09.613414+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:10.613640+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:11.613819+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:12.613988+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:13.614147+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:14.614336+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:15.614455+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:16.614612+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:17.614790+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:18.615009+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:19.615266+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:20.615469+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:21.615626+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:22.615775+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:23.616111+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:24.616383+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:25.616713+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:26.616905+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:27.617079+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:28.617349+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:29.617619+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:30.617774+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:31.617909+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:32.618083+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:33.618289+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:34.618676+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:35.618961+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:36.619166+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:37.619441+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:38.619667+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:39.619794+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:40.619973+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:41.620160+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:42.620340+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:43.620549+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:44.620863+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:45.621061+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:46.621252+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:47.621446+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:48.621659+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:49.621984+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:50.622145+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:51.622352+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:52.622509+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:53.622686+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:54.622858+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:55.623023+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:56.623193+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:57.623342+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:58.623533+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:59.623745+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:00.623879+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:01.624046+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 606208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:02.624273+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 606208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.751983643s of 299.792205811s, submitted: 24
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:03.624412+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 606208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [0,0,0,1])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:04.624664+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1564672 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [0,0,0,0,1])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:05.624844+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:06.625314+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:07.625653+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:08.625856+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:09.626054+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:10.626295+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:11.626458+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:12.626668+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:13.626806+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:14.626963+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:15.627215+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:16.627352+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:17.627493+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:18.627621+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:19.627757+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:20.627932+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:21.628076+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:22.628227+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:23.628376+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:24.628550+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:25.628781+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:26.628947+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:27.629086+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:28.629344+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:29.629482+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:30.629673+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:31.629942+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:32.630208+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:33.630356+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:34.630496+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:35.630731+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:36.630928+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:37.631006+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:38.631133+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:39.631308+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:40.631448+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:41.631640+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:42.631806+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:43.631967+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:44.632157+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:45.632293+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:46.632448+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:47.632779+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:48.632943+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:49.633061+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:50.633229+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:51.633405+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:52.633613+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:53.633765+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:54.634027+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:55.634267+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:56.634430+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:57.634587+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:58.634671+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:59.634827+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:00.635001+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:01.635161+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:02.635346+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:03.635510+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:04.635681+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:05.635938+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:06.636107+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:07.636221+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:08.636400+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:09.636659+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:10.636784+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:11.636917+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:12.637012+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:13.637134+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:14.637266+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:15.637449+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:16.637616+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:17.637722+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:18.637872+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:19.638003+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:20.638137+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:21.638290+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:22.638415+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:23.638585+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:24.638797+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:25.638993+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:26.639150+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:27.639287+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:28.639447+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:29.639627+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:30.639758+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:31.639878+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:32.640034+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:33.640202+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:34.640344+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:35.640627+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:36.640870+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:37.641162+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:38.641332+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:39.641468+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:40.641629+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:41.641794+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:42.641934+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:43.642001+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:44.642168+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:45.642335+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:46.642462+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:47.642579+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:48.642962+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:49.643075+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:50.643164+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:51.643304+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:52.643416+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:53.643640+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:54.643795+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:55.643962+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:56.644125+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:57.644261+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:58.644427+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:59.644618+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:00.644840+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:01.645055+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:02.645425+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:03.645628+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:04.645869+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:05.646122+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:06.646309+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:07.646503+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:08.646716+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:09.646890+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1458176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:10.647029+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1458176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:11.647175+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1458176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:12.647323+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1458176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:13.647480+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1458176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:14.647851+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:15.648114+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:16.648355+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:17.648608+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:18.648811+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:19.649034+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:20.649254+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:21.649433+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:22.649590+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:23.649744+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:24.650003+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:25.650278+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:26.650521+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:27.650679+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:28.650822+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:29.651093+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1425408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:30.651316+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1425408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:31.651479+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1425408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:32.651698+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1425408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:33.651861+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1425408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:34.652074+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:35.652261+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:36.652430+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:37.652650+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:38.652878+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:39.653054+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:40.653272+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:41.653425+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:42.653553+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:43.653686+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:44.653816+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:45.653987+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:46.654191+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:47.654396+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:48.654670+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:49.654891+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:50.655121+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:51.655292+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:52.655523+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:53.655741+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:54.655967+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:55.656191+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:56.656326+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:57.656470+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:58.656614+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x564375ddfc00
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1245184 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 175.503921509s of 176.320312500s, submitted: 90
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:59.656779+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1236992 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:00.656944+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb480c/0x16a000, compress 0x0/0x0/0x0, omap 0xf6b4, meta 0x2bc094c), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 17883136 heap: 92192768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:01.657129+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973073 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 17874944 heap: 92192768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:02.657274+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 121 ms_handle_reset con 0x564375ddfc00 session 0x564373aa7a40
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 17850368 heap: 92192768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x5643756ef400
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:03.657478+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 24731648 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:04.657667+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 121 handle_osd_map epochs [121,122], i have 122, src has [1,122]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 122 ms_handle_reset con 0x5643756ef400 session 0x5643762716c0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 24698880 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:05.657851+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 24698880 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:06.658017+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023290 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fbeb4000/0x0/0x4ffc00000, data 0x10b9b8f/0x1174000, compress 0x0/0x0/0x0, omap 0x10659, meta 0x2bbf9a7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 24698880 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:07.658235+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 24698880 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:08.658418+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fbeb4000/0x0/0x4ffc00000, data 0x10b9b8f/0x1174000, compress 0x0/0x0/0x0, omap 0x10659, meta 0x2bbf9a7), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 24698880 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:09.658644+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 24690688 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:10.658798+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 24690688 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:11.659014+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023290 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 24690688 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.607306480s of 12.865227699s, submitted: 43
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:12.659197+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb3000/0x0/0x4ffc00000, data 0x10bb60e/0x1177000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:13.659382+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:14.659593+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:15.659779+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:16.659964+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025456 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:17.660202+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:18.660393+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb3000/0x0/0x4ffc00000, data 0x10bb60e/0x1177000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:19.660596+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:20.660793+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb3000/0x0/0x4ffc00000, data 0x10bb60e/0x1177000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:21.660992+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025456 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb3000/0x0/0x4ffc00000, data 0x10bb60e/0x1177000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:22.661192+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:23.661336+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x564373ac8000
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.464639664s of 11.474132538s, submitted: 13
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 24551424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:24.661532+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 24551424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:25.661732+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 10
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 24633344 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:26.661909+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb2000/0x0/0x4ffc00000, data 0x10bb6a9/0x1178000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028120 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:27.662128+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:28.662281+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb2000/0x0/0x4ffc00000, data 0x10bb7df/0x117a000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:29.662464+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:30.662886+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:31.663059+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029668 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb2000/0x0/0x4ffc00000, data 0x10bb7df/0x117a000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:32.663207+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb2000/0x0/0x4ffc00000, data 0x10bb7df/0x117a000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:33.663393+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 11
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb2000/0x0/0x4ffc00000, data 0x10bb7df/0x117a000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 24535040 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:34.663669+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 24535040 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:35.663926+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x56437445ec00
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.099026680s of 12.106583595s, submitted: 3
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 123 handle_osd_map epochs [123,124], i have 124, src has [1,124]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24403968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:36.664101+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031326 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24403968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:37.664679+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24403968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:38.664867+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbeaf000/0x0/0x4ffc00000, data 0x10bd2ae/0x117b000, compress 0x0/0x0/0x0, omap 0x10bc6, meta 0x2bbf43a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24403968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:39.665000+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24403968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:40.665136+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:41.665266+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033510 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:42.665411+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:43.665593+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:44.665754+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbeab000/0x0/0x4ffc00000, data 0x10bedc8/0x117f000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbeab000/0x0/0x4ffc00000, data 0x10bedc8/0x117f000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:45.665979+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:46.666142+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035202 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:47.666309+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbeab000/0x0/0x4ffc00000, data 0x10bedc8/0x117f000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:48.666450+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:49.666660+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.533925056s of 13.742151260s, submitted: 39
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 24387584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:50.666832+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 24387584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:51.666961+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033764 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 24387584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:52.667120+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 24387584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:53.667254+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbead000/0x0/0x4ffc00000, data 0x10bed2d/0x117e000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 24387584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:54.667417+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:55.667608+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:56.667840+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbead000/0x0/0x4ffc00000, data 0x10bed2d/0x117e000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033764 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:57.668012+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:58.668170+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbead000/0x0/0x4ffc00000, data 0x10bed2d/0x117e000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:59.668353+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:00.668613+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbead000/0x0/0x4ffc00000, data 0x10bed2d/0x117e000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:01.668794+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033764 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:02.668930+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:03.669180+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.992928505s of 13.995832443s, submitted: 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbead000/0x0/0x4ffc00000, data 0x10bed2d/0x117e000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:04.669388+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:05.669622+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:06.669814+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035694 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:07.669963+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbeaa000/0x0/0x4ffc00000, data 0x10c0897/0x1180000, compress 0x0/0x0/0x0, omap 0x11135, meta 0x2bbeecb), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:08.670150+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:09.670311+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:10.670531+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:11.670781+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038468 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:12.670960+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:13.671121+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:14.671275+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:15.671455+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:16.671631+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038468 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:17.671784+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:18.671950+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:19.672163+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:20.672317+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:21.672500+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038468 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:22.672683+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:23.672892+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:24.673203+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:25.673398+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:26.673725+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038468 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:27.673847+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:28.673995+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:29.674178+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:30.674412+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:31.674650+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038468 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:32.674818+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:33.674988+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:34.675175+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:35.675335+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 127 handle_osd_map epochs [127,128], i have 128, src has [1,128]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.546075821s of 32.449630737s, submitted: 38
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:36.675505+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041242 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fbea4000/0x0/0x4ffc00000, data 0x10c3f7b/0x1186000, compress 0x0/0x0/0x0, omap 0x116bd, meta 0x2bbe943), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:37.675676+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:38.675844+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fbe9f000/0x0/0x4ffc00000, data 0x10c5ba0/0x1189000, compress 0x0/0x0/0x0, omap 0x11948, meta 0x2bbe6b8), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:39.676046+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:40.676201+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:41.676352+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044480 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:42.676517+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:43.676667+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:44.676805+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fbea2000/0x0/0x4ffc00000, data 0x10c5c3b/0x118a000, compress 0x0/0x0/0x0, omap 0x11948, meta 0x2bbe6b8), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:45.676968+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 129 handle_osd_map epochs [129,130], i have 130, src has [1,130]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.761423111s of 10.037869453s, submitted: 73
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:46.677109+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 23314432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046790 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:47.677262+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 23314432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fbe9e000/0x0/0x4ffc00000, data 0x10c77e5/0x118c000, compress 0x0/0x0/0x0, omap 0x11bd3, meta 0x2bbe42d), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:48.677406+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 23273472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:49.677697+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 23273472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:50.677835+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 23248896 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fbe9a000/0x0/0x4ffc00000, data 0x10c952d/0x1190000, compress 0x0/0x0/0x0, omap 0x11f55, meta 0x2bbe0ab), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:51.677951+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 23248896 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054988 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:52.678115+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 23240704 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:53.678264+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 23224320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0x10ccdff/0x1196000, compress 0x0/0x0/0x0, omap 0x12557, meta 0x2bbdaa9), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:54.678390+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:55.678678+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0x10ccdff/0x1196000, compress 0x0/0x0/0x0, omap 0x12557, meta 0x2bbdaa9), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:56.678828+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058226 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:57.678960+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0x10ccdff/0x1196000, compress 0x0/0x0/0x0, omap 0x12557, meta 0x2bbdaa9), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.772963524s of 12.161247253s, submitted: 121
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:58.679113+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 23175168 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:59.679319+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 23175168 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:00.679429+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 23175168 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fbe91000/0x0/0x4ffc00000, data 0x10cce9a/0x1197000, compress 0x0/0x0/0x0, omap 0x12557, meta 0x2bbdaa9), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:01.679645+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061940 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:02.679801+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe90000/0x0/0x4ffc00000, data 0x10ce959/0x119a000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:03.679978+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:04.680158+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:05.680359+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:06.680716+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061940 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:07.680879+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:08.681031+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe90000/0x0/0x4ffc00000, data 0x10ce959/0x119a000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:09.681172+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:10.681359+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:11.681599+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe90000/0x0/0x4ffc00000, data 0x10ce959/0x119a000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061940 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:12.681798+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:13.681971+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.065046310s of 16.300659180s, submitted: 13
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:14.682119+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:15.682278+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe8f000/0x0/0x4ffc00000, data 0x10ce9f4/0x119b000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:16.682436+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063632 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:17.682640+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:18.682849+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:19.683036+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:20.683180+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:21.683307+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe8f000/0x0/0x4ffc00000, data 0x10ceb2a/0x119d000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe8f000/0x0/0x4ffc00000, data 0x10ceb2a/0x119d000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063726 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:22.683447+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe91000/0x0/0x4ffc00000, data 0x10ce9f4/0x119b000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:23.683604+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:24.683741+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:25.683911+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:26.684053+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.181322098s of 12.437225342s, submitted: 5
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067220 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:27.684221+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe8c000/0x0/0x4ffc00000, data 0x10d05f9/0x119e000, compress 0x0/0x0/0x0, omap 0x12bff, meta 0x2bbd401), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:28.684451+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:29.684646+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:30.684841+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:31.684987+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069404 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:32.685157+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe89000/0x0/0x4ffc00000, data 0x10d2052/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:33.685301+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:34.685473+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:35.685648+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe89000/0x0/0x4ffc00000, data 0x10d2052/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:36.685816+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071096 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:37.685998+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:38.686129+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:39.686257+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:40.686387+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.552777290s of 13.626544952s, submitted: 38
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe89000/0x0/0x4ffc00000, data 0x10d2052/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:41.686546+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070392 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:42.686745+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:43.686873+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d200d/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:44.687025+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:45.687209+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d200d/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:46.687360+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 23191552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072084 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:47.687464+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe89000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 23191552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:48.687622+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:49.687810+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:50.687987+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:51.688109+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073616 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:52.688253+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.846646309s of 12.063707352s, submitted: 6
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:53.688455+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe89000/0x0/0x4ffc00000, data 0x10d2143/0x11a3000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:54.688649+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:55.688820+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:56.689032+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071780 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:57.689240+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:58.689412+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:59.689576+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:00.689783+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:01.689975+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071780 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:02.690205+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.153181076s of 10.157307625s, submitted: 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:03.690382+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:04.690528+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:05.690717+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:06.690913+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072882 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:07.691041+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 23142400 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:08.691194+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 23142400 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:09.691420+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 23142400 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:10.691618+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 23134208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:11.691795+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 23134208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072148 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:12.691966+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 23134208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8b000/0x0/0x4ffc00000, data 0x10d200d/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:13.692158+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 23134208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8b000/0x0/0x4ffc00000, data 0x10d200d/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:14.692286+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x56437673bc00
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.712426186s of 11.808559418s, submitted: 5
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 22986752 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x56437673b000
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:15.692476+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 12
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 22790144 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:16.692614+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 22790144 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076490 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:17.692738+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 22790144 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:18.692924+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:19.693107+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8b000/0x0/0x4ffc00000, data 0x10d1fdd/0x11a0000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:20.693281+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:21.693435+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078516 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:22.693586+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:23.693759+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe88000/0x0/0x4ffc00000, data 0x10d3b47/0x11a2000, compress 0x0/0x0/0x0, omap 0x1318d, meta 0x2bbce73), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:24.693950+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe88000/0x0/0x4ffc00000, data 0x10d3b47/0x11a2000, compress 0x0/0x0/0x0, omap 0x1318d, meta 0x2bbce73), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:25.694132+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:26.694277+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe88000/0x0/0x4ffc00000, data 0x10d3b47/0x11a2000, compress 0x0/0x0/0x0, omap 0x1318d, meta 0x2bbce73), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077336 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:27.694420+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:28.694581+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.803647995s of 14.063410759s, submitted: 34
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:29.694716+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:30.694879+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 137 handle_osd_map epochs [137,138], i have 138, src has [1,138]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:31.695031+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081212 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:32.695219+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe86000/0x0/0x4ffc00000, data 0x10d552b/0x11a4000, compress 0x0/0x0/0x0, omap 0x13487, meta 0x2bbcb79), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:33.695423+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:34.695613+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0x10d55c6/0x11a5000, compress 0x0/0x0/0x0, omap 0x13487, meta 0x2bbcb79), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:35.695793+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 22732800 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:36.696036+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 22724608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0x10d5661/0x11a6000, compress 0x0/0x0/0x0, omap 0x13487, meta 0x2bbcb79), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083014 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:37.696224+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 22724608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:38.696437+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 22724608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:39.696647+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 22724608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0x10d5661/0x11a6000, compress 0x0/0x0/0x0, omap 0x13487, meta 0x2bbcb79), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:40.696928+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.989875793s of 12.011968613s, submitted: 18
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 22863872 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:41.697204+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 22913024 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085534 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:42.697397+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe82000/0x0/0x4ffc00000, data 0x10d71fb/0x11a8000, compress 0x0/0x0/0x0, omap 0x13712, meta 0x2bbc8ee), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 22904832 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:43.697640+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 22904832 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:44.697852+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 22904832 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:45.698078+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 22904832 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:46.698231+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 22888448 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086026 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:47.698423+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 22888448 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:48.698596+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe80000/0x0/0x4ffc00000, data 0x10d8d75/0x11aa000, compress 0x0/0x0/0x0, omap 0x1399d, meta 0x2bbc663), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 22872064 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:49.698793+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 22872064 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:50.698971+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 22872064 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:51.699115+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 22872064 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.631030083s of 11.823550224s, submitted: 51
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088800 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:52.699293+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 22863872 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:53.699475+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fbe7d000/0x0/0x4ffc00000, data 0x10da830/0x11ad000, compress 0x0/0x0/0x0, omap 0x13d1c, meta 0x2bbc2e4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 22863872 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:54.699682+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 22839296 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:55.699913+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 22839296 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7d000/0x0/0x4ffc00000, data 0x10da8f9/0x11ae000, compress 0x0/0x0/0x0, omap 0x13d1c, meta 0x2bbc2e4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:56.700176+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 22839296 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092546 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:57.700379+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 22831104 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:58.700544+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc376/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 22831104 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:59.700778+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 22822912 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:00.700912+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:01.701083+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:02.701298+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094110 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.159491539s of 10.234383583s, submitted: 34
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe79000/0x0/0x4ffc00000, data 0x10dc43e/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:03.701685+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:04.701827+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:05.702016+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:06.702181+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:07.702346+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093504 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc2af/0x11b0000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:08.702480+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:09.702675+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:10.703072+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:11.703306+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:12.703508+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093504 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc34a/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:13.703621+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:14.703755+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc34a/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.524249077s of 12.662443161s, submitted: 5
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:15.703982+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:16.704165+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:17.704427+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095068 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:18.704625+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe79000/0x0/0x4ffc00000, data 0x10dc412/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:19.704810+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:20.704951+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 22765568 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:21.705127+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc34a/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc34a/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:22.705271+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094334 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:23.705501+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc34a/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:24.705711+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:25.705973+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.469438553s of 10.483627319s, submitted: 6
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe79000/0x0/0x4ffc00000, data 0x10dc413/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:26.706146+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:27.706287+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095068 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:28.706478+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:29.706630+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 ms_handle_reset con 0x56437673bc00 session 0x564375444e00
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 ms_handle_reset con 0x56437673b000 session 0x5643748f7a40
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:30.706772+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 22413312 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:31.706958+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc413/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 22413312 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 13
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:32.707124+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094908 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:33.707264+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc413/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:34.707399+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:35.707815+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:36.708087+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc413/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:37.708226+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094908 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:38.708315+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:39.708713+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.680962563s of 13.837501526s, submitted: 183
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:40.708847+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:41.709007+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc378/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:42.709171+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094318 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:43.709255+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:44.709372+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc378/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:45.709526+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:46.709663+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:47.709841+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094318 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:48.710011+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc378/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:49.710170+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc378/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.737087250s of 10.740295410s, submitted: 2
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:50.710243+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:51.710364+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc376/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:52.710541+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093744 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:53.710694+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:54.710829+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:55.711005+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0x10dc2af/0x11b0000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:56.711144+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:57.711289+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093728 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:58.711437+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0x10dc2af/0x11b0000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:59.711625+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:00.711813+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.274223328s of 10.329898834s, submitted: 4
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 22323200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:01.712089+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc377/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 22323200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:02.712205+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095436 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 22323200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:03.712343+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 22323200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc377/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:04.712526+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 22315008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:05.712793+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc377/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc375/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22249472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:06.712951+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22249472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:07.713095+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096896 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22249472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:08.713220+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe48000/0x0/0x4ffc00000, data 0x110fa49/0x11e4000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 19922944 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:09.713368+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 19726336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:10.713678+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.950484276s of 10.159562111s, submitted: 31
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 19972096 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:11.713833+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 19972096 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:12.713947+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102510 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe30000/0x0/0x4ffc00000, data 0x11269ac/0x11fc000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 19750912 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:13.714085+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 19595264 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:14.714189+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 19595264 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:15.714365+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 19038208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:16.714531+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 19038208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:17.714698+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104758 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 16670720 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:18.714850+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fabf8000/0x0/0x4ffc00000, data 0x11bfae8/0x1294000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 16359424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:19.714964+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 16351232 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:20.715112+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.725925446s of 10.040389061s, submitted: 39
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 16449536 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:21.715259+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 16498688 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fabd8000/0x0/0x4ffc00000, data 0x11df53a/0x12b4000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [0,0,0,2])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:22.715385+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115474 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 16490496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:23.715524+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 16490496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:24.715675+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 16244736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:25.715860+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 16392192 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:26.716023+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fab76000/0x0/0x4ffc00000, data 0x1240927/0x1316000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 16392192 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:27.716148+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114086 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 16187392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:28.716278+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 16187392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:29.716418+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 15089664 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:30.716651+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fab32000/0x0/0x4ffc00000, data 0x128579d/0x135a000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.163474083s of 10.421551704s, submitted: 53
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 14729216 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:31.716945+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fab32000/0x0/0x4ffc00000, data 0x128579d/0x135a000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 15384576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:32.717171+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119124 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fab27000/0x0/0x4ffc00000, data 0x129078c/0x1365000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 15278080 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:33.717379+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 15425536 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:34.717722+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4faaf0000/0x0/0x4ffc00000, data 0x12c741b/0x139c000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4faaf0000/0x0/0x4ffc00000, data 0x12c741b/0x139c000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 15425536 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:35.717994+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 15417344 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:36.718166+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 15237120 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:37.718358+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119860 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 15187968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:38.718611+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 15187968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:39.718794+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 14974976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:40.719085+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4faab8000/0x0/0x4ffc00000, data 0x12ffaf2/0x13d4000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 13402112 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:41.719346+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.934916496s of 10.981097221s, submitted: 51
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 13402112 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:42.719596+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127704 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 13131776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:43.719798+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87072768 unmapped: 13516800 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:44.720012+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87072768 unmapped: 13516800 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:45.720213+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4faa47000/0x0/0x4ffc00000, data 0x137036b/0x1445000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 13664256 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:46.720383+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 13664256 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:47.720523+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135400 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87097344 unmapped: 13492224 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:48.720695+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa9f2000/0x0/0x4ffc00000, data 0x13c3c2f/0x1499000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 13647872 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:49.720872+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 13647872 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:50.721059+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 12394496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:51.721219+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 12115968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:52.721399+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa9a2000/0x0/0x4ffc00000, data 0x14151a3/0x14ea000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139232 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 12115968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:53.721522+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 12115968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:54.721710+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.313160896s of 12.612329483s, submitted: 62
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87998464 unmapped: 12591104 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:55.721918+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 12509184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:56.722079+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 12509184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:57.722228+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138590 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 12263424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:58.722360+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa96f000/0x0/0x4ffc00000, data 0x1448b04/0x151d000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 12263424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:59.722613+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa950000/0x0/0x4ffc00000, data 0x14675ca/0x153c000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 12263424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:00.722764+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 7449 writes, 29K keys, 7449 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7448 writes, 1611 syncs, 4.62 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1785 writes, 5185 keys, 1785 commit groups, 1.0 writes per commit group, ingest: 6.20 MB, 0.01 MB/s
                                           Interval WAL: 1784 writes, 696 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88489984 unmapped: 12099584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:01.722921+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 12058624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:02.723114+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144750 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88604672 unmapped: 11984896 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:03.723292+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x14a2849/0x1578000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 11837440 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:04.723417+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.493882179s of 10.143390656s, submitted: 48
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 12263424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:05.723673+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 11419648 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:06.723835+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:07.723992+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 11337728 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144574 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:08.724132+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 11337728 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc ms_handle_reset ms_handle_reset con 0x564373ac8400
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: get_auth_request con 0x564376671800 auth_method 0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:09.724332+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89382912 unmapped: 11206656 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa8da000/0x0/0x4ffc00000, data 0x14dc410/0x15b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:10.724533+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:11.724724+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:12.731632+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147494 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa8d8000/0x0/0x4ffc00000, data 0x14dc51b/0x15b4000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:13.731829+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:14.731968+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.332037926s of 10.297814369s, submitted: 22
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:15.732193+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:16.732320+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa8d8000/0x0/0x4ffc00000, data 0x14dc547/0x15b4000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 142 handle_osd_map epochs [142,143], i have 143, src has [1,143]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:17.732465+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151418 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:18.732682+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:19.732842+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:20.733064+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa8d2000/0x0/0x4ffc00000, data 0x14de14e/0x15b7000, compress 0x0/0x0/0x0, omap 0x142ac, meta 0x3d5bd54), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:21.733211+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:22.733348+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150108 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:23.733509+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa8d6000/0x0/0x4ffc00000, data 0x14de0b1/0x15b6000, compress 0x0/0x0/0x0, omap 0x142ac, meta 0x3d5bd54), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:24.733708+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:25.733893+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.173578262s of 10.346817970s, submitted: 30
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:26.734022+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:27.734170+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151320 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:28.734291+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:29.734421+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d3000/0x0/0x4ffc00000, data 0x14df9ce/0x15b7000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:30.734639+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:31.734793+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89513984 unmapped: 11075584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:32.734922+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89513984 unmapped: 11075584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d2000/0x0/0x4ffc00000, data 0x14dfb0c/0x15b9000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154000 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:33.735056+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 11059200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:34.735194+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 11051008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d2000/0x0/0x4ffc00000, data 0x14dfb0c/0x15b9000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:35.735386+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 11051008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:36.735622+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.757122040s of 10.806098938s, submitted: 20
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 11026432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:37.735843+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d2000/0x0/0x4ffc00000, data 0x14dfac5/0x15b9000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154000 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:38.736248+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:39.736439+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:40.736639+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:41.736805+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d4000/0x0/0x4ffc00000, data 0x14dfa69/0x15b8000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:42.736993+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153250 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:43.737159+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:44.737317+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d1000/0x0/0x4ffc00000, data 0x14dfba7/0x15ba000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x5643756ef400
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:45.737541+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89481216 unmapped: 11108352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:46.737783+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89481216 unmapped: 11108352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 14
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:47.737973+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 11091968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.172966957s of 11.213060379s, submitted: 14
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157480 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:48.738137+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 11091968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:49.738289+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:50.738485+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d1000/0x0/0x4ffc00000, data 0x14dfb60/0x15ba000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:51.738620+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:52.738837+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154480 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:53.739009+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:54.739165+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:55.739363+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d2000/0x0/0x4ffc00000, data 0x14df9ce/0x15b7000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:56.739545+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:57.739782+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.465863228s of 10.024806023s, submitted: 10
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155166 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:58.739920+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d4000/0x0/0x4ffc00000, data 0x14dfa69/0x15b8000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:59.740050+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89522176 unmapped: 11067392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:00.740192+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89522176 unmapped: 11067392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:01.740359+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 11149312 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:02.743619+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 11149312 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155166 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:03.743788+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 11141120 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d4000/0x0/0x4ffc00000, data 0x14dfa69/0x15b8000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:04.743923+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 11059200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:05.744084+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 11059200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:06.744234+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89554944 unmapped: 11034624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:07.744366+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d5000/0x0/0x4ffc00000, data 0x14df9ce/0x15b7000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [0,0,0,0,0,1])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 11026432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.508973122s of 10.182012558s, submitted: 58
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153474 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:08.744523+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 11001856 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:09.744698+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 11001856 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:10.744843+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 11001856 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:11.745021+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90480640 unmapped: 10108928 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:12.745127+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90480640 unmapped: 10108928 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153474 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d5000/0x0/0x4ffc00000, data 0x14df9ce/0x15b7000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:13.745276+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90488832 unmapped: 10100736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:14.745432+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90488832 unmapped: 10100736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:15.745609+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90488832 unmapped: 10100736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:16.745763+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90488832 unmapped: 10100736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:17.745930+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d5000/0x0/0x4ffc00000, data 0x14df9ce/0x15b7000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153474 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:18.746092+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:19.746265+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:20.746398+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:21.746526+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:22.746663+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.560441017s of 15.103803635s, submitted: 58
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152884 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d6000/0x0/0x4ffc00000, data 0x14df933/0x15b6000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:23.746800+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:24.746954+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:25.747116+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:26.747254+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:27.747400+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152884 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:28.747610+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d6000/0x0/0x4ffc00000, data 0x14df933/0x15b6000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:29.747780+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:30.747924+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:31.748083+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:32.748223+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152884 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d6000/0x0/0x4ffc00000, data 0x14df933/0x15b6000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:33.748723+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:34.748879+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.775199890s of 11.934508324s, submitted: 1
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:35.749052+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d6000/0x0/0x4ffc00000, data 0x14df933/0x15b6000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 10076160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:36.749218+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 10076160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:37.749399+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 10076160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156378 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:38.749674+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa8d1000/0x0/0x4ffc00000, data 0x14e1538/0x15b9000, compress 0x0/0x0/0x0, omap 0x1483c, meta 0x3d5b7c4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 10076160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa8d1000/0x0/0x4ffc00000, data 0x14e1538/0x15b9000, compress 0x0/0x0/0x0, omap 0x1483c, meta 0x3d5b7c4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:39.749865+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 10076160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:40.750063+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 10067968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:41.750198+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 10067968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:42.750406+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa8d1000/0x0/0x4ffc00000, data 0x14e1538/0x15b9000, compress 0x0/0x0/0x0, omap 0x1483c, meta 0x3d5b7c4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159152 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:43.750638+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:44.750794+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8ce000/0x0/0x4ffc00000, data 0x14e2fb7/0x15bc000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x3d5b482), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:45.750977+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.054538727s of 11.160003662s, submitted: 37
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:46.751177+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:47.751359+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160844 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:48.751551+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e3052/0x15bd000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x3d5b482), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:49.751755+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:50.751906+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:51.752096+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:52.752256+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160844 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:53.752436+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e3052/0x15bd000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x3d5b482), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:54.752669+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:55.752919+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x56437673b800
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 10043392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:56.753103+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 10043392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:57.753261+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.760693550s of 11.774554253s, submitted: 5
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 10043392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165734 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:58.753404+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 10043392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:59.753639+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e325d/0x15bf000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 10043392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:00.753830+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 10035200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:01.753981+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 10035200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cc000/0x0/0x4ffc00000, data 0x14e32f8/0x15c0000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:02.754139+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 10035200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165144 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:03.754301+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e30ed/0x15be000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 10035200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e30ed/0x15be000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:04.754508+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 10035200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:05.754747+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e3052/0x15bd000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:06.754868+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:07.755008+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1164426 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:08.755162+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:09.755298+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:10.755444+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e3052/0x15bd000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:11.755642+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.751629829s of 13.765338898s, submitted: 5
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:12.755810+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166946 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:13.755970+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:14.756103+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:15.756262+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa8ca000/0x0/0x4ffc00000, data 0x14e4c57/0x15c0000, compress 0x0/0x0/0x0, omap 0x14ea7, meta 0x3d5b159), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:16.756441+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:17.756610+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 147 handle_osd_map epochs [147,148], i have 148, src has [1,148]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169720 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:18.756789+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa8ca000/0x0/0x4ffc00000, data 0x14e4c57/0x15c0000, compress 0x0/0x0/0x0, omap 0x14ea7, meta 0x3d5b159), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90578944 unmapped: 10010624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa8c7000/0x0/0x4ffc00000, data 0x14e66d6/0x15c3000, compress 0x0/0x0/0x0, omap 0x15170, meta 0x3d5ae90), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:19.756979+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90578944 unmapped: 10010624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:20.757157+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90578944 unmapped: 10010624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:21.758178+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90578944 unmapped: 10010624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:22.758405+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90587136 unmapped: 10002432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171412 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:23.759391+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa8c6000/0x0/0x4ffc00000, data 0x14e6771/0x15c4000, compress 0x0/0x0/0x0, omap 0x15170, meta 0x3d5ae90), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90587136 unmapped: 10002432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:24.759543+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90587136 unmapped: 10002432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:25.760396+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90587136 unmapped: 10002432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 148 handle_osd_map epochs [148,149], i have 149, src has [1,149]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.325739861s of 14.450867653s, submitted: 36
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:26.760663+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:27.760957+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:28.761174+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175144 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8c4000/0x0/0x4ffc00000, data 0x14e82db/0x15c6000, compress 0x0/0x0/0x0, omap 0x153fb, meta 0x3d5ac05), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:29.761412+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:30.761665+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8c3000/0x0/0x4ffc00000, data 0x14e8376/0x15c7000, compress 0x0/0x0/0x0, omap 0x153fb, meta 0x3d5ac05), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:31.761905+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:32.762185+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 149 handle_osd_map epochs [149,150], i have 150, src has [1,150]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 9969664 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:33.762365+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177774 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 9969664 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0x14e9df5/0x15ca000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:34.762487+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90628096 unmapped: 9961472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:35.762715+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90636288 unmapped: 9953280 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:36.762973+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90636288 unmapped: 9953280 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:37.763202+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.823903084s of 11.909013748s, submitted: 42
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:38.763366+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179848 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c1000/0x0/0x4ffc00000, data 0x14e9e90/0x15cb000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:39.763601+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:40.763758+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c1000/0x0/0x4ffc00000, data 0x14e9e90/0x15cb000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:41.763964+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:42.764144+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c2000/0x0/0x4ffc00000, data 0x14e9df5/0x15ca000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:43.764295+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178012 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:44.764465+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:45.764660+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:46.764778+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c1000/0x0/0x4ffc00000, data 0x14e9e90/0x15cb000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:47.764950+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c1000/0x0/0x4ffc00000, data 0x14e9e90/0x15cb000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 8896512 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:48.765139+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181260 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c1000/0x0/0x4ffc00000, data 0x14e9e90/0x15cb000, compress 0x0/0x0/0x0, omap 0x15752, meta 0x3d5a8ae), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 8896512 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:49.765286+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 8896512 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:50.765437+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.961210251s of 12.982180595s, submitted: 4
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:51.765603+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:52.765747+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8bc000/0x0/0x4ffc00000, data 0x14eba95/0x15ce000, compress 0x0/0x0/0x0, omap 0x159dd, meta 0x3d5a623), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:53.765873+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186302 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:54.766082+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8bc000/0x0/0x4ffc00000, data 0x14ebbcb/0x15d0000, compress 0x0/0x0/0x0, omap 0x159dd, meta 0x3d5a623), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:55.766279+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:56.766465+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8bd000/0x0/0x4ffc00000, data 0x14ebb30/0x15cf000, compress 0x0/0x0/0x0, omap 0x159dd, meta 0x3d5a623), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:57.766658+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91717632 unmapped: 8871936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:58.766904+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191582 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91725824 unmapped: 8863744 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:59.767108+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8b4000/0x0/0x4ffc00000, data 0x14ef24f/0x15d6000, compress 0x0/0x0/0x0, omap 0x15f31, meta 0x3d5a0cf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 8855552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:00.769606+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 8855552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:01.769726+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 8855552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:02.769855+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 15
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 8855552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:03.770187+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193622 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.605374336s of 12.818570137s, submitted: 65
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8b5000/0x0/0x4ffc00000, data 0x14ef1b4/0x15d5000, compress 0x0/0x0/0x0, omap 0x15f31, meta 0x3d5a0cf), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91750400 unmapped: 8839168 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:04.770471+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91750400 unmapped: 8839168 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:05.770657+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91758592 unmapped: 8830976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:06.770782+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 153 handle_osd_map epochs [153,154], i have 154, src has [1,154]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 8806400 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:07.770946+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 154 handle_osd_map epochs [155,156], i have 154, src has [1,156]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 7725056 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:08.771139+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8b4000/0x0/0x4ffc00000, data 0x14f0cb3/0x15d6000, compress 0x0/0x0/0x0, omap 0x161a4, meta 0x3d59e5c), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201660 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 7716864 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:09.771313+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 7716864 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:10.771457+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 7684096 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:11.771715+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8af000/0x0/0x4ffc00000, data 0x14f4338/0x15db000, compress 0x0/0x0/0x0, omap 0x164ab, meta 0x3d59b55), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 7684096 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:12.771867+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:13.772063+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203412 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8ac000/0x0/0x4ffc00000, data 0x14f5e17/0x15de000, compress 0x0/0x0/0x0, omap 0x16932, meta 0x3d596ce), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:14.772194+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:15.772806+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.035589218s of 12.326159477s, submitted: 104
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:16.772959+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:17.773098+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa8a9000/0x0/0x4ffc00000, data 0x14f7a4c/0x15e1000, compress 0x0/0x0/0x0, omap 0x16bbd, meta 0x3d59443), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:18.773299+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206186 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:19.773502+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:20.773691+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa8a9000/0x0/0x4ffc00000, data 0x14f7a4c/0x15e1000, compress 0x0/0x0/0x0, omap 0x16bbd, meta 0x3d59443), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:21.773885+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:22.774078+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:23.774277+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208960 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:24.774499+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 7659520 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:25.774717+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.055333138s of 10.117359161s, submitted: 72
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 7643136 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:26.774870+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f94eb/0x15e4000, compress 0x0/0x0/0x0, omap 0x16f44, meta 0x3d590bc), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 7643136 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:27.775053+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa8a3000/0x0/0x4ffc00000, data 0x14fb0f0/0x15e7000, compress 0x0/0x0/0x0, omap 0x171cf, meta 0x3d58e31), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 7643136 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:28.775231+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215118 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 7643136 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:29.775384+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 160 ms_handle_reset con 0x56437673b800 session 0x5643762d8a80
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 7643136 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:30.775607+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 160 ms_handle_reset con 0x5643756ef400 session 0x5643769e5180
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa8a1000/0x0/0x4ffc00000, data 0x14fb226/0x15e9000, compress 0x0/0x0/0x0, omap 0x171cf, meta 0x3d58e31), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93306880 unmapped: 7282688 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:31.775810+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93339648 unmapped: 7249920 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 16
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:32.775952+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 7430144 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:33.776152+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217620 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x14fe735/0x15ec000, compress 0x0/0x0/0x0, omap 0x1774b, meta 0x3d588b5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 7430144 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:34.776316+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x14fe735/0x15ec000, compress 0x0/0x0/0x0, omap 0x1774b, meta 0x3d588b5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:35.776526+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:36.776708+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:37.776845+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:38.777067+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217620 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:39.777238+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:40.777455+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x14fe735/0x15ec000, compress 0x0/0x0/0x0, omap 0x1774b, meta 0x3d588b5), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.240417480s of 14.388905525s, submitted: 267
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:41.777630+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:42.777786+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:43.777976+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:44.778118+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:45.778271+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:46.778422+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:47.778619+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:48.779006+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:49.779202+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:50.779400+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:51.779531+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:52.779653+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:53.779816+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:54.780137+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:55.780371+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:56.780556+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:57.780812+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:58.780991+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:59.781233+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:00.781432+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:01.781609+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:02.781791+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:03.781975+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:04.782137+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:05.782347+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:06.782484+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:07.782705+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:08.782867+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:09.783068+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:10.783285+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:11.783479+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:12.783779+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:13.783933+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:14.784078+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:15.784243+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:16.784384+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:17.784713+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:18.784874+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:19.785058+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:20.785244+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.604015350s of 40.622814178s, submitted: 12
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 163 handle_osd_map epochs [163,164], i have 164, src has [1,164]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:21.785392+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x1501dd9/0x15f2000, compress 0x0/0x0/0x0, omap 0x17c61, meta 0x3d5839f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:22.785554+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:23.785794+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222704 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:24.786020+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x1501dd9/0x15f2000, compress 0x0/0x0/0x0, omap 0x17c61, meta 0x3d5839f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:25.786295+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:26.786429+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:27.786642+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:28.786858+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222704 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:29.786992+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:30.787123+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x1501dd9/0x15f2000, compress 0x0/0x0/0x0, omap 0x17c61, meta 0x3d5839f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:31.787256+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:32.787423+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x1501dd9/0x15f2000, compress 0x0/0x0/0x0, omap 0x17c61, meta 0x3d5839f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:33.787689+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222704 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:34.787885+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:35.788173+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:36.788369+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x1501dd9/0x15f2000, compress 0x0/0x0/0x0, omap 0x17c61, meta 0x3d5839f), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:37.788601+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 164 handle_osd_map epochs [164,165], i have 165, src has [1,165]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.066982269s of 17.141559601s, submitted: 35
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:38.788729+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:39.788911+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:40.789161+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:41.789313+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:42.789455+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:43.789628+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:44.789800+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:45.789964+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:46.790153+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:47.790310+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:48.790472+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:49.790686+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:50.790834+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:51.791719+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:52.791884+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:53.792116+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:54.792271+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:55.792461+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:56.792603+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:57.792764+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:58.793331+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:59.793511+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:00.793629+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:01.793757+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:02.793953+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:03.794116+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:04.794251+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:05.794414+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:06.794671+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:07.794925+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:08.795055+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:09.795223+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:10.795556+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:11.795714+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:12.795844+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:13.795976+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:14.796168+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:15.796394+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:16.796544+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:17.796760+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:18.796931+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:19.797105+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:20.797265+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:21.797528+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:22.797682+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:23.797828+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:24.797962+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:25.798165+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:26.798344+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:27.798535+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 7356416 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:28.798723+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 7356416 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:29.798902+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 7356416 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:30.799036+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 52.169063568s of 52.178222656s, submitted: 12
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 ms_handle_reset con 0x564373ac8000 session 0x564376270380
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 7069696 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:31.799180+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 7069696 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:32.799334+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 17
Jan 23 19:11:09 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 7069696 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:33.799487+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 7069696 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:09 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:09 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:34.799645+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 7069696 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:35.799794+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 6938624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: do_command 'config diff' '{prefix=config diff}'
Jan 23 19:11:09 compute-0 ceph-osd[88072]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:36.799916+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: do_command 'config show' '{prefix=config show}'
Jan 23 19:11:09 compute-0 ceph-osd[88072]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 19:11:09 compute-0 ceph-osd[88072]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 19:11:09 compute-0 ceph-osd[88072]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93757440 unmapped: 6832128 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 19:11:09 compute-0 ceph-osd[88072]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:37.800065+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93978624 unmapped: 6610944 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:11:09 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:38.800238+0000)
Jan 23 19:11:09 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 6324224 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:09 compute-0 ceph-osd[88072]: do_command 'log dump' '{prefix=log dump}'
Jan 23 19:11:09 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} v 0)
Jan 23 19:11:09 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:11:09 compute-0 nova_compute[240119]: 2026-01-23 19:11:09.800 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:11:09 compute-0 nova_compute[240119]: 2026-01-23 19:11:09.801 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:11:09 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 19:11:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 23 19:11:09 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212505961' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 23 19:11:10 compute-0 ceph-mon[75097]: pgmap v1255: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:10 compute-0 ceph-mon[75097]: from='client.14554 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:10 compute-0 ceph-mon[75097]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:10 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3096066787' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 23 19:11:10 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:11:10 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2212505961' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 23 19:11:10 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} v 0)
Jan 23 19:11:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:11:10 compute-0 nova_compute[240119]: 2026-01-23 19:11:10.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:11:10 compute-0 nova_compute[240119]: 2026-01-23 19:11:10.348 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:11:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 23 19:11:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3092498351' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 23 19:11:10 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:11 compute-0 ceph-mon[75097]: from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:11 compute-0 ceph-mon[75097]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:11 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:11:11 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3092498351' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 23 19:11:11 compute-0 ceph-mon[75097]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 23 19:11:11 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1629044209' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 23 19:11:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14572 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:11 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14576 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 19:11:11 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2254909286' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 23 19:11:12 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14578 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:12 compute-0 ceph-mon[75097]: pgmap v1256: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:12 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1629044209' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 23 19:11:12 compute-0 ceph-mon[75097]: from='client.14572 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:12 compute-0 ceph-mon[75097]: from='client.14576 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:12 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2254909286' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 23 19:11:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 23 19:11:12 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1698134701' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 23 19:11:12 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14582 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:12 compute-0 crontab[255892]: (root) LIST (root)
Jan 23 19:11:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 23 19:11:12 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169938308' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 23 19:11:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14586 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:13 compute-0 ceph-mon[75097]: from='client.14578 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:13 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1698134701' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 23 19:11:13 compute-0 ceph-mon[75097]: from='client.14582 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:13 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1169938308' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 23 19:11:13 compute-0 nova_compute[240119]: 2026-01-23 19:11:13.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:11:13 compute-0 nova_compute[240119]: 2026-01-23 19:11:13.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:11:13 compute-0 nova_compute[240119]: 2026-01-23 19:11:13.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:11:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:11:13.592 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:11:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:11:13.593 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:11:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:11:13.593 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:11:13 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14590 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:13 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:11:13.626+0000 7f06d8601640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 19:11:13 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 19:11:13 compute-0 nova_compute[240119]: 2026-01-23 19:11:13.953 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:11:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 23 19:11:13 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/971006246' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:14.893271+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 2007040 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:15.893415+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:39:45.839482+0000 osd.1 (osd.1) 122 : cluster [DBG] 2.7 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:39:45.850077+0000 osd.1 (osd.1) 123 : cluster [DBG] 2.7 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 123)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:39:45.839482+0000 osd.1 (osd.1) 122 : cluster [DBG] 2.7 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:39:45.850077+0000 osd.1 (osd.1) 123 : cluster [DBG] 2.7 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 1998848 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:16.893714+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876423 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 1998848 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:17.893932+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 1990656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:18.894077+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 1990656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:19.894295+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 1990656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:20.894507+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 1982464 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:21.894653+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876423 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 1982464 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:22.894841+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 1974272 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:23.895072+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.075523376s of 15.099667549s, submitted: 8
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 1966080 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:24.895216+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:39:53.938063+0000 osd.1 (osd.1) 124 : cluster [DBG] 2.9 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:39:53.948601+0000 osd.1 (osd.1) 125 : cluster [DBG] 2.9 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1957888 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 125)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:39:53.938063+0000 osd.1 (osd.1) 124 : cluster [DBG] 2.9 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:39:53.948601+0000 osd.1 (osd.1) 125 : cluster [DBG] 2.9 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:25.895466+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:39:55.871061+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.c scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:39:55.881636+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.c scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 1949696 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 127)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:39:55.871061+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.c scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:39:55.881636+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.c scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:26.895708+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881245 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 1949696 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:27.895888+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:39:57.811072+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:39:57.821665+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 1941504 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 129)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:39:57.811072+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:39:57.821665+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:28.896132+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 1941504 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:29.896376+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 1941504 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:30.896963+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 1933312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:31.897084+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:01.830336+0000 osd.1 (osd.1) 130 : cluster [DBG] 10.2 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:01.840647+0000 osd.1 (osd.1) 131 : cluster [DBG] 10.2 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 131)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:01.830336+0000 osd.1 (osd.1) 130 : cluster [DBG] 10.2 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:01.840647+0000 osd.1 (osd.1) 131 : cluster [DBG] 10.2 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886069 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 1933312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:32.897288+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:02.808047+0000 osd.1 (osd.1) 132 : cluster [DBG] 2.3 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:02.818245+0000 osd.1 (osd.1) 133 : cluster [DBG] 2.3 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 133)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:02.808047+0000 osd.1 (osd.1) 132 : cluster [DBG] 2.3 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:02.818245+0000 osd.1 (osd.1) 133 : cluster [DBG] 2.3 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 1908736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:33.897933+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:03.787282+0000 osd.1 (osd.1) 134 : cluster [DBG] 10.b scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:03.797779+0000 osd.1 (osd.1) 135 : cluster [DBG] 10.b scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 1908736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 135)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:03.787282+0000 osd.1 (osd.1) 134 : cluster [DBG] 10.b scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:03.797779+0000 osd.1 (osd.1) 135 : cluster [DBG] 10.b scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:34.898181+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 1908736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.788429260s of 11.848600388s, submitted: 12
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:35.898336+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:05.786685+0000 osd.1 (osd.1) 136 : cluster [DBG] 5.9 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:05.797141+0000 osd.1 (osd.1) 137 : cluster [DBG] 5.9 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 1900544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 137)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:05.786685+0000 osd.1 (osd.1) 136 : cluster [DBG] 5.9 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:05.797141+0000 osd.1 (osd.1) 137 : cluster [DBG] 5.9 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:36.898525+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:06.765325+0000 osd.1 (osd.1) 138 : cluster [DBG] 2.d scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:06.775818+0000 osd.1 (osd.1) 139 : cluster [DBG] 2.d scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895715 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 1900544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 139)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:06.765325+0000 osd.1 (osd.1) 138 : cluster [DBG] 2.d scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:06.775818+0000 osd.1 (osd.1) 139 : cluster [DBG] 2.d scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:37.898736+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 1892352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:38.898878+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 1892352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:39.899049+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:09.857204+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.16 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:09.867741+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.16 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 1884160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 141)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:09.857204+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.16 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:09.867741+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.16 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:40.899307+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 1884160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:41.899446+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898128 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 1884160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:42.899608+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:12.775385+0000 osd.1 (osd.1) 142 : cluster [DBG] 10.6 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:12.785678+0000 osd.1 (osd.1) 143 : cluster [DBG] 10.6 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 143)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:12.775385+0000 osd.1 (osd.1) 142 : cluster [DBG] 10.6 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:12.785678+0000 osd.1 (osd.1) 143 : cluster [DBG] 10.6 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 1851392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:43.899865+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 1851392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:44.900032+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 1843200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:45.900223+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 1843200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:46.900343+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 900541 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 1835008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:47.900497+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 1835008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:48.900673+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 1818624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:49.900883+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 1802240 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.027151108s of 15.058136940s, submitted: 8
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:50.901097+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:20.844799+0000 osd.1 (osd.1) 144 : cluster [DBG] 10.19 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:20.855407+0000 osd.1 (osd.1) 145 : cluster [DBG] 10.19 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 145)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:20.844799+0000 osd.1 (osd.1) 144 : cluster [DBG] 10.19 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:20.855407+0000 osd.1 (osd.1) 145 : cluster [DBG] 10.19 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 1794048 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:51.901357+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:21.835986+0000 osd.1 (osd.1) 146 : cluster [DBG] 10.1a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:21.846514+0000 osd.1 (osd.1) 147 : cluster [DBG] 10.1a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 147)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:21.835986+0000 osd.1 (osd.1) 146 : cluster [DBG] 10.1a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:21.846514+0000 osd.1 (osd.1) 147 : cluster [DBG] 10.1a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905371 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 1769472 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:52.901603+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 1769472 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:53.901797+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:23.809468+0000 osd.1 (osd.1) 148 : cluster [DBG] 2.15 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:23.820091+0000 osd.1 (osd.1) 149 : cluster [DBG] 2.15 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 1761280 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:54.902040+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 149)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:23.809468+0000 osd.1 (osd.1) 148 : cluster [DBG] 2.15 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:23.820091+0000 osd.1 (osd.1) 149 : cluster [DBG] 2.15 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 1761280 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:55.902238+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 1761280 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:56.902408+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 1 last_log 150 sent 149 num 1 unsent 1 sending 1
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:26.892685+0000 osd.1 (osd.1) 150 : cluster [DBG] 5.12 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 150)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:26.892685+0000 osd.1 (osd.1) 150 : cluster [DBG] 5.12 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910197 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 1753088 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:57.902701+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 1 last_log 151 sent 150 num 1 unsent 1 sending 1
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:26.903160+0000 osd.1 (osd.1) 151 : cluster [DBG] 5.12 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 151)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:26.903160+0000 osd.1 (osd.1) 151 : cluster [DBG] 5.12 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 1753088 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:58.902894+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:28.814932+0000 osd.1 (osd.1) 152 : cluster [DBG] 2.17 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:28.825474+0000 osd.1 (osd.1) 153 : cluster [DBG] 2.17 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 153)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:28.814932+0000 osd.1 (osd.1) 152 : cluster [DBG] 2.17 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:28.825474+0000 osd.1 (osd.1) 153 : cluster [DBG] 2.17 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 1744896 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:59.903059+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 1736704 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:00.903233+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 1736704 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:01.903416+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912610 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 1728512 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:02.903618+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 1728512 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.849177361s of 12.963145256s, submitted: 10
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:03.903853+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:33.807958+0000 osd.1 (osd.1) 154 : cluster [DBG] 5.13 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:33.818443+0000 osd.1 (osd.1) 155 : cluster [DBG] 5.13 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 1720320 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 155)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:33.807958+0000 osd.1 (osd.1) 154 : cluster [DBG] 5.13 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:33.818443+0000 osd.1 (osd.1) 155 : cluster [DBG] 5.13 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:04.904130+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:34.832106+0000 osd.1 (osd.1) 156 : cluster [DBG] 5.11 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:34.842746+0000 osd.1 (osd.1) 157 : cluster [DBG] 5.11 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 1695744 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 157)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:34.832106+0000 osd.1 (osd.1) 156 : cluster [DBG] 5.11 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:34.842746+0000 osd.1 (osd.1) 157 : cluster [DBG] 5.11 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:05.904374+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 1687552 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:06.904552+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:36.803786+0000 osd.1 (osd.1) 158 : cluster [DBG] 5.f scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:36.814472+0000 osd.1 (osd.1) 159 : cluster [DBG] 5.f scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919847 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 1687552 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 159)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:36.803786+0000 osd.1 (osd.1) 158 : cluster [DBG] 5.f scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:36.814472+0000 osd.1 (osd.1) 159 : cluster [DBG] 5.f scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:07.904764+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:37.851874+0000 osd.1 (osd.1) 160 : cluster [DBG] 10.14 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:37.866091+0000 osd.1 (osd.1) 161 : cluster [DBG] 10.14 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 1679360 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 161)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:37.851874+0000 osd.1 (osd.1) 160 : cluster [DBG] 10.14 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:37.866091+0000 osd.1 (osd.1) 161 : cluster [DBG] 10.14 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:08.904963+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 1662976 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:09.905130+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:39.866300+0000 osd.1 (osd.1) 162 : cluster [DBG] 10.12 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:39.880401+0000 osd.1 (osd.1) 163 : cluster [DBG] 10.12 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 1654784 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 163)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:39.866300+0000 osd.1 (osd.1) 162 : cluster [DBG] 10.12 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:39.880401+0000 osd.1 (osd.1) 163 : cluster [DBG] 10.12 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:10.905339+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 1646592 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:11.905531+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924677 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 1646592 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:12.905676+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:42.817723+0000 osd.1 (osd.1) 164 : cluster [DBG] 6.2 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:42.828244+0000 osd.1 (osd.1) 165 : cluster [DBG] 6.2 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 1638400 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:13.905928+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 165)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:42.817723+0000 osd.1 (osd.1) 164 : cluster [DBG] 6.2 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:42.828244+0000 osd.1 (osd.1) 165 : cluster [DBG] 6.2 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 1622016 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:14.906053+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 1622016 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:15.906182+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 1613824 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:16.906323+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927088 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 1613824 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.755460739s of 13.953031540s, submitted: 12
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:17.906478+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:47.761028+0000 osd.1 (osd.1) 166 : cluster [DBG] 6.6 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:47.775288+0000 osd.1 (osd.1) 167 : cluster [DBG] 6.6 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 167)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:47.761028+0000 osd.1 (osd.1) 166 : cluster [DBG] 6.6 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:47.775288+0000 osd.1 (osd.1) 167 : cluster [DBG] 6.6 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 1597440 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:18.906652+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:48.748599+0000 osd.1 (osd.1) 168 : cluster [DBG] 6.4 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:48.773607+0000 osd.1 (osd.1) 169 : cluster [DBG] 6.4 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 169)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:48.748599+0000 osd.1 (osd.1) 168 : cluster [DBG] 6.4 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:48.773607+0000 osd.1 (osd.1) 169 : cluster [DBG] 6.4 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 1597440 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:19.906859+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:49.781247+0000 osd.1 (osd.1) 170 : cluster [DBG] 6.d scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:49.798984+0000 osd.1 (osd.1) 171 : cluster [DBG] 6.d scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 171)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:49.781247+0000 osd.1 (osd.1) 170 : cluster [DBG] 6.d scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:49.798984+0000 osd.1 (osd.1) 171 : cluster [DBG] 6.d scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 1589248 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:20.907096+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 1589248 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:21.907272+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934321 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 1581056 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:22.907409+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:52.798616+0000 osd.1 (osd.1) 172 : cluster [DBG] 6.e scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:52.812224+0000 osd.1 (osd.1) 173 : cluster [DBG] 6.e scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 173)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:52.798616+0000 osd.1 (osd.1) 172 : cluster [DBG] 6.e scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:52.812224+0000 osd.1 (osd.1) 173 : cluster [DBG] 6.e scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 1556480 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:23.907666+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 1556480 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:24.907880+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 1548288 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:25.908031+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 1548288 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:26.908172+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936732 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 1540096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:27.908331+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 1540096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.066595078s of 11.083057404s, submitted: 8
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:28.908467+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:58.844065+0000 osd.1 (osd.1) 174 : cluster [DBG] 6.1 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:40:58.854612+0000 osd.1 (osd.1) 175 : cluster [DBG] 6.1 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 175)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:58.844065+0000 osd.1 (osd.1) 174 : cluster [DBG] 6.1 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:40:58.854612+0000 osd.1 (osd.1) 175 : cluster [DBG] 6.1 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 1540096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:29.908637+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 1531904 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:30.908814+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 1531904 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:31.908975+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939143 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 1523712 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:32.909165+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 1523712 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:33.909327+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 1507328 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:34.909506+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 1507328 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:35.909674+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 1507328 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:36.909821+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939143 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 1499136 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:37.909967+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 1499136 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:38.910086+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 1490944 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:39.910201+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 1490944 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:40.910409+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1482752 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:41.910646+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939143 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1482752 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:42.910796+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1482752 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.919639587s of 14.925643921s, submitted: 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:43.910950+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:13.769832+0000 osd.1 (osd.1) 176 : cluster [DBG] 6.c scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:13.783983+0000 osd.1 (osd.1) 177 : cluster [DBG] 6.c scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 177)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:13.769832+0000 osd.1 (osd.1) 176 : cluster [DBG] 6.c scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:13.783983+0000 osd.1 (osd.1) 177 : cluster [DBG] 6.c scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 1474560 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:44.911201+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 1474560 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:45.911386+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:15.746134+0000 osd.1 (osd.1) 178 : cluster [DBG] 6.b scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:15.760391+0000 osd.1 (osd.1) 179 : cluster [DBG] 6.b scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 179)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:15.746134+0000 osd.1 (osd.1) 178 : cluster [DBG] 6.b scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:15.760391+0000 osd.1 (osd.1) 179 : cluster [DBG] 6.b scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 1474560 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:46.911651+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943965 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 1466368 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:47.911855+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 1466368 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:48.912026+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 1458176 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:49.912231+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 1458176 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:50.912442+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 1449984 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:51.912619+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:21.787916+0000 osd.1 (osd.1) 180 : cluster [DBG] 11.16 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:21.798553+0000 osd.1 (osd.1) 181 : cluster [DBG] 11.16 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 181)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:21.787916+0000 osd.1 (osd.1) 180 : cluster [DBG] 11.16 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:21.798553+0000 osd.1 (osd.1) 181 : cluster [DBG] 11.16 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946380 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 1449984 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:52.912831+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:22.837661+0000 osd.1 (osd.1) 182 : cluster [DBG] 11.13 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:22.847984+0000 osd.1 (osd.1) 183 : cluster [DBG] 11.13 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 183)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:22.837661+0000 osd.1 (osd.1) 182 : cluster [DBG] 11.13 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:22.847984+0000 osd.1 (osd.1) 183 : cluster [DBG] 11.13 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 1449984 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:53.913037+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:54.913328+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.744715691s of 12.033573151s, submitted: 8
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:55.913512+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:25.803357+0000 osd.1 (osd.1) 184 : cluster [DBG] 11.a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:25.813896+0000 osd.1 (osd.1) 185 : cluster [DBG] 11.a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 185)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:25.803357+0000 osd.1 (osd.1) 184 : cluster [DBG] 11.a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:25.813896+0000 osd.1 (osd.1) 185 : cluster [DBG] 11.a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 1409024 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:56.913735+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:26.787092+0000 osd.1 (osd.1) 186 : cluster [DBG] 11.c scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:26.797641+0000 osd.1 (osd.1) 187 : cluster [DBG] 11.c scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 187)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:26.787092+0000 osd.1 (osd.1) 186 : cluster [DBG] 11.c scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:26.797641+0000 osd.1 (osd.1) 187 : cluster [DBG] 11.c scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953621 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 1409024 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:57.913920+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 1400832 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:58.914052+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 1400832 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:59.914204+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 1392640 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:00.914421+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 1392640 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:01.914547+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953621 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 1392640 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:02.914705+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 1384448 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:03.914845+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 1384448 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:04.914983+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 1376256 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:05.915153+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 1376256 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.081073761s of 11.089123726s, submitted: 4
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:06.915307+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:36.892479+0000 osd.1 (osd.1) 188 : cluster [DBG] 11.5 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:36.903117+0000 osd.1 (osd.1) 189 : cluster [DBG] 11.5 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 189)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:36.892479+0000 osd.1 (osd.1) 188 : cluster [DBG] 11.5 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:36.903117+0000 osd.1 (osd.1) 189 : cluster [DBG] 11.5 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956034 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 1376256 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:07.915519+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1359872 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:08.915676+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1359872 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:09.915866+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 1351680 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:10.916018+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 1351680 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:11.916151+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956034 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 1351680 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:12.916377+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 1343488 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:13.916540+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 1327104 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:14.916705+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:43.957433+0000 osd.1 (osd.1) 190 : cluster [DBG] 11.0 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:43.971350+0000 osd.1 (osd.1) 191 : cluster [DBG] 11.0 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 191)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:43.957433+0000 osd.1 (osd.1) 190 : cluster [DBG] 11.0 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:43.971350+0000 osd.1 (osd.1) 191 : cluster [DBG] 11.0 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 1318912 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:15.916918+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1310720 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:16.917062+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:45.921354+0000 osd.1 (osd.1) 192 : cluster [DBG] 11.7 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:45.931909+0000 osd.1 (osd.1) 193 : cluster [DBG] 11.7 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 193)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:45.921354+0000 osd.1 (osd.1) 192 : cluster [DBG] 11.7 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:45.931909+0000 osd.1 (osd.1) 193 : cluster [DBG] 11.7 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960860 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 1302528 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.941231728s of 10.954809189s, submitted: 6
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:17.917815+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:47.847427+0000 osd.1 (osd.1) 194 : cluster [DBG] 11.1d scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:47.857814+0000 osd.1 (osd.1) 195 : cluster [DBG] 11.1d scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 195)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:47.847427+0000 osd.1 (osd.1) 194 : cluster [DBG] 11.1d scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:47.857814+0000 osd.1 (osd.1) 195 : cluster [DBG] 11.1d scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 1302528 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:18.918022+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 1 last_log 196 sent 195 num 1 unsent 1 sending 1
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:48.894155+0000 osd.1 (osd.1) 196 : cluster [DBG] 9.15 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 196)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:48.894155+0000 osd.1 (osd.1) 196 : cluster [DBG] 9.15 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 1277952 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:19.918267+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 1 last_log 197 sent 196 num 1 unsent 1 sending 1
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:48.918863+0000 osd.1 (osd.1) 197 : cluster [DBG] 9.15 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 197)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:48.918863+0000 osd.1 (osd.1) 197 : cluster [DBG] 9.15 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 1277952 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:20.918614+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 1277952 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:21.918835+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965688 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1269760 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:22.919015+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1269760 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:23.919161+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1261568 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:24.919293+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1261568 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:25.919529+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1261568 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:26.919698+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:56.778070+0000 osd.1 (osd.1) 198 : cluster [DBG] 9.14 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:56.816857+0000 osd.1 (osd.1) 199 : cluster [DBG] 9.14 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 199)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:56.778070+0000 osd.1 (osd.1) 198 : cluster [DBG] 9.14 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:56.816857+0000 osd.1 (osd.1) 199 : cluster [DBG] 9.14 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968101 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1253376 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:27.919905+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:57.819030+0000 osd.1 (osd.1) 200 : cluster [DBG] 9.10 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:41:57.836641+0000 osd.1 (osd.1) 201 : cluster [DBG] 9.10 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 201)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:57.819030+0000 osd.1 (osd.1) 200 : cluster [DBG] 9.10 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:41:57.836641+0000 osd.1 (osd.1) 201 : cluster [DBG] 9.10 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1245184 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:28.920177+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1236992 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:29.920332+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1236992 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:30.920535+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1236992 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:31.920757+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970514 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1228800 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:32.920884+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1228800 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.970235825s of 15.995883942s, submitted: 8
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:33.921049+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:03.843329+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.12 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:03.873002+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.12 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 203)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:03.843329+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.12 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:03.873002+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.12 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1196032 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:34.921292+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 1187840 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:35.921456+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:05.791966+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.2 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:05.834290+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.2 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 205)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:05.791966+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.2 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:05.834290+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.2 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1179648 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:36.921692+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:06.743017+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.0 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:06.792435+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.0 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 207)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:06.743017+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.0 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:06.792435+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.0 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977749 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1179648 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:37.921936+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1171456 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:38.922211+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:08.752953+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:08.795299+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 209)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:08.752953+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:08.795299+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1163264 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:39.922443+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1163264 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:40.922616+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:10.713417+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.4 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:10.759543+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.4 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 211)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:10.713417+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.4 scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:10.759543+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.4 scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1163264 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:41.922795+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982571 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 1155072 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:42.923084+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 1155072 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:43.923700+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:13.692925+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.1a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:13.717649+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.1a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 213)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:13.692925+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.1a scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:13.717649+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.1a scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 1146880 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:44.923899+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 1146880 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:45.924082+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.793076515s of 12.894679070s, submitted: 12
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 1138688 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:46.924235+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:16.738049+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.1f scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  will send 2026-01-23T18:42:16.766337+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.1f scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client handle_log_ack log(last 215)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:16.738049+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.1f scrub starts
Jan 23 19:11:14 compute-0 ceph-osd[87009]: log_client  logged 2026-01-23T18:42:16.766337+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.1f scrub ok
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 1130496 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:47.924460+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 1130496 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:48.924657+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1122304 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:49.924848+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1122304 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:50.925176+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1122304 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:51.925363+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 1114112 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:52.925628+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 1114112 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:53.925791+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1105920 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:54.925997+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1105920 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:55.926144+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1097728 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:56.926300+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1097728 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:57.926471+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1097728 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:58.926604+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 1089536 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:59.926733+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1073152 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:00.926922+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:01.927077+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1064960 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:02.927216+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1064960 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:03.927355+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 1056768 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:04.927540+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 1048576 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:05.927804+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 1048576 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:06.928057+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:07.928223+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:08.928470+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:09.928652+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 1032192 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:10.928876+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 1032192 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:11.929042+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 1024000 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:12.929205+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 1024000 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:13.929353+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 1024000 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:14.929491+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 1015808 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:15.929670+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 1015808 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:16.929820+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 1007616 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:17.930067+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 1007616 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:18.930349+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 999424 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:19.930681+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 999424 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:20.930941+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 999424 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:21.931202+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 991232 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:22.931483+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 991232 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:23.931758+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 974848 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:24.932020+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 974848 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:25.932251+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 974848 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:26.932488+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:27.932734+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:28.932990+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 958464 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:29.933221+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 958464 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:30.933523+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 958464 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:31.933789+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 950272 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:32.933936+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 942080 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:33.934093+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 942080 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:34.934278+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 942080 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:35.934515+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 933888 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:36.934815+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 933888 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:37.935084+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 933888 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:38.935326+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 925696 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:39.935578+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 925696 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:40.936038+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 917504 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:41.936285+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 917504 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:42.936540+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 909312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:43.936790+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 909312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:44.937054+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 909312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:45.937248+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 901120 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:46.937395+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 901120 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:47.937548+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:48.937741+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:49.937876+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:50.938061+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:51.938233+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:52.938490+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:53.938683+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:54.938863+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:55.939081+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:56.939329+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:57.939611+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:58.939914+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:59.940154+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:00.940432+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:01.940604+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:02.940740+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:03.940889+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:04.941030+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:05.941234+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:06.941439+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:07.941612+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:08.941729+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:09.941847+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:10.942032+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:11.942205+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 778240 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:12.942348+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 778240 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:13.942483+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 770048 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:14.942667+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 770048 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:15.942839+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 770048 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:16.942975+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 761856 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:17.943140+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 761856 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:18.943289+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 753664 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:19.943489+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 745472 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:20.943743+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 737280 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:21.944033+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 737280 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:22.944212+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 737280 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:23.944409+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 729088 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:24.944663+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 729088 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:25.944836+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 729088 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:26.944961+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 720896 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:27.945155+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 720896 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:28.945265+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 712704 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:29.945420+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 712704 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:30.945665+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 704512 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:31.945841+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 704512 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:32.946011+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 704512 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:33.946169+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:34.946307+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:35.946583+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 679936 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:36.946733+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 679936 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:37.946851+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 671744 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:38.946964+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 671744 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:39.947108+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 663552 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:40.947299+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 663552 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:41.947419+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 663552 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:42.947616+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 655360 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:43.947784+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 655360 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:44.947968+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 655360 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:45.948111+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:46.948274+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:47.948419+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 638976 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:48.948578+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 638976 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:49.948738+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 630784 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:50.948939+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 630784 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:51.949110+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 630784 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:52.949291+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 622592 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:53.949435+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 614400 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:54.949587+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 606208 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:55.949794+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 606208 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:56.949940+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 598016 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:57.950063+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 598016 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:58.950206+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 589824 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:59.950366+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 589824 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:00.950689+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 573440 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:01.950887+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 565248 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:02.950995+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 565248 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:03.951137+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 565248 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:04.951304+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 557056 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:05.951500+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 557056 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:06.951653+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 557056 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:07.951793+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 557056 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:08.951954+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 548864 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:09.952147+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 548864 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:10.952299+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 548864 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:11.952421+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 540672 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:12.952543+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 540672 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:13.952665+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 532480 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:14.952791+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 532480 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:15.952924+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 532480 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:16.953094+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 524288 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:17.953259+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 516096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:18.953509+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 516096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:19.953716+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 516096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:20.953937+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 507904 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:21.954091+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 507904 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:22.954244+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 499712 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:23.954446+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 499712 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:24.954633+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 499712 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:25.954781+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 491520 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:26.954967+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 491520 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:27.955134+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 483328 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:28.955318+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 483328 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:29.955470+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 483328 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:30.955654+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 475136 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:31.955875+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 475136 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:32.956038+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 466944 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:33.956183+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 466944 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:34.956367+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 466944 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:35.956498+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 458752 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:36.956625+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 458752 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:37.956744+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 450560 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:38.956887+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 450560 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:39.957055+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 450560 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:40.957258+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 442368 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:41.957438+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 442368 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:42.957704+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 434176 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:43.957918+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 434176 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:44.958094+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 434176 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:45.958253+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 425984 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:46.958437+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 425984 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:47.958595+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 417792 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:48.958707+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 417792 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:49.958846+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 409600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:50.959023+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 409600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:51.959138+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 409600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:52.959263+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 401408 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:53.959385+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 401408 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:54.959544+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 401408 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:55.959726+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6806 writes, 28K keys, 6806 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6806 writes, 1252 syncs, 5.44 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6806 writes, 28K keys, 6806 commit groups, 1.0 writes per commit group, ingest: 19.71 MB, 0.03 MB/s
                                           Interval WAL: 6806 writes, 1252 syncs, 5.44 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 335872 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:56.959852+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 335872 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:57.959987+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 327680 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:58.960121+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 327680 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:59.960289+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 327680 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:00.960465+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 311296 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:01.960603+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 311296 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:02.960739+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 303104 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:03.960883+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 303104 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:04.961054+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 294912 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:05.961198+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 294912 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:06.961310+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 294912 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:07.961463+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 286720 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:08.961619+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 286720 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:09.961784+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 278528 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:10.961933+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 278528 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:11.962072+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 278528 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:12.962204+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 270336 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:13.962347+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 270336 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:14.962507+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 262144 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:15.962657+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 262144 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:16.962797+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 253952 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:17.962969+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 253952 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:18.963114+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 253952 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:19.963255+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 245760 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:20.963429+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 245760 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:21.963658+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 237568 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:22.963831+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 237568 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:23.964029+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 229376 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:24.964188+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 229376 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:25.964311+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 229376 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:26.964731+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 221184 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:27.964901+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 221184 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:28.965012+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 212992 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:29.965130+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 212992 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:30.965281+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 204800 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:31.965390+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 204800 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:32.965613+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 204800 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:33.965735+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 196608 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:34.965839+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 188416 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:35.965980+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 188416 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:36.966099+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 180224 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:37.966218+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 180224 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:38.966352+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 270336 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:39.966474+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 270336 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:40.966613+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 262144 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:41.966739+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 262144 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:42.966876+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 253952 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:43.967016+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 253952 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:44.967196+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 253952 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:45.967332+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 245760 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:46.967481+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 245760 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:47.967649+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 237568 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:48.967845+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 237568 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:49.968044+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 229376 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:50.968216+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 229376 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:51.968407+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 229376 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:52.968614+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 221184 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:53.968765+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 221184 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:54.968951+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 212992 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:55.969090+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 212992 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:56.969234+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 204800 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:57.969382+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 204800 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:58.969554+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 204800 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:59.969741+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 196608 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:00.969988+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 196608 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:01.970155+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 188416 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:02.970281+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 256.302703857s of 256.307067871s, submitted: 2
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 1327104 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:03.970407+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 1318912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:04.970613+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 1318912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:05.970739+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 1318912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:06.970866+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 1318912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:07.970999+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 1318912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:08.971161+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 1318912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:09.971318+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 1318912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:10.971496+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 1318912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:11.971638+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 1318912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:12.971769+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1310720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:13.971894+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1310720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:14.972034+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 1302528 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:15.972219+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 1302528 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:16.972397+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 1294336 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:17.972529+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 1294336 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:18.972716+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 1294336 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:19.972861+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1286144 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:20.973045+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1286144 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:21.973184+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1277952 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:22.973304+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1277952 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:23.973454+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1277952 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:24.973610+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 1269760 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:25.973739+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 1269760 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:26.973911+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1261568 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:27.974062+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1261568 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:28.974216+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1261568 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:29.974409+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1253376 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:30.974646+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1253376 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:31.974790+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1245184 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:32.974917+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1245184 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:33.975056+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1245184 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:34.975204+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1236992 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:35.975332+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1236992 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:36.975493+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1228800 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:37.975616+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1228800 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:38.975758+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:39.975916+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1228800 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:40.976157+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 1212416 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:41.976352+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 1212416 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:42.976604+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1204224 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:43.976755+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1204224 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:44.976872+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1196032 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:45.976998+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1196032 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:46.977159+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1196032 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:47.977317+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 1187840 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:48.977442+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 1187840 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:49.977630+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 1179648 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:50.977793+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 1179648 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:51.977938+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 1171456 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:52.978094+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 1171456 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:53.978221+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 1171456 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:54.978331+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1163264 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:55.978481+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1163264 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:56.978629+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:57.978756+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:58.978932+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:59.979138+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:00.979328+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:01.979479+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:02.979632+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:03.979943+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:04.980060+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:05.980200+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:06.980372+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:07.980552+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:08.980752+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:09.980872+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:10.981032+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:11.981173+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:12.981361+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:13.981507+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:14.981753+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:15.981932+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:16.982080+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:17.982262+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:18.982402+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:19.982637+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:20.982891+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:21.983044+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:22.987870+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:23.988001+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:24.988232+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:25.988386+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:26.988548+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:27.988737+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:28.988882+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:29.989055+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:30.989288+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:31.989627+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:32.989924+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:33.990113+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:34.990304+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:35.990798+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:36.991080+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:37.991377+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:38.991524+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:39.991775+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:40.992173+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:41.992320+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:42.992487+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:43.992664+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:44.992777+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:45.992914+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:46.993044+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:47.993164+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:48.993291+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:49.993440+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:50.993620+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:51.993756+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:52.993929+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:53.994344+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:54.994486+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:55.994624+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:56.994777+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:57.994914+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:58.995078+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:59.995265+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:00.995423+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:01.995619+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:02.995760+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:03.995924+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:04.996047+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:05.996176+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:06.996295+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:07.996437+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:08.996621+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:09.996753+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:10.996916+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:11.997080+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:12.997291+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:13.997527+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:14.997762+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:15.997953+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:16.998353+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:17.998598+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:18.998814+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:19.998989+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:21.000250+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:22.000393+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:23.000523+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:24.000731+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:25.000891+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:26.001103+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:27.001320+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:28.001520+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:29.001653+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:30.001790+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:31.002058+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:32.002402+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:33.002633+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:34.002806+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:35.003003+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:36.003203+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:37.003354+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:38.003534+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:39.003772+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:40.003954+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:41.004114+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:42.004257+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:43.004437+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:44.004625+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:45.004782+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:46.004916+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:47.005077+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:48.005237+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:49.005342+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:50.005472+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:51.005726+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:52.005886+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:53.006046+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:54.006634+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:55.006857+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:56.006996+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:57.007144+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:58.007297+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:59.007481+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:00.007676+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:01.007810+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:02.007955+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:03.008119+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:04.008286+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:05.008514+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:06.008638+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:07.008860+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:08.009077+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:09.009226+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:10.009380+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:11.009547+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:12.009714+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:13.009830+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:14.009954+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:15.010221+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:16.010381+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:17.010533+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:18.010697+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:19.010851+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:20.010975+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:21.011256+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:22.011388+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:23.011539+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:24.011771+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:25.011901+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:26.012032+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:27.012190+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:28.012372+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:29.012495+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:30.012623+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:31.012786+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:32.012906+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:33.013030+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:34.013192+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:35.013320+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:36.013438+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:37.013643+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:38.013786+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:39.014134+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:40.014273+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:41.014442+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:42.014666+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:43.014828+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:44.014972+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:45.015155+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:46.015552+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:47.015783+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:48.015967+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:49.016264+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:50.016425+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:51.016642+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:52.016788+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:53.016920+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:54.017132+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:55.017298+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:56.017462+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:57.017689+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:58.017889+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:59.018133+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:00.018359+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:01.018506+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 1064960 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:02.018676+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 1064960 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:03.018884+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 1064960 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc ms_handle_reset ms_handle_reset con 0x55944a35e000
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: get_auth_request con 0x55944a35f400 auth_method 0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:04.019080+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 ms_handle_reset con 0x55944adaa000 session 0x55944ae53180
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944adaa800
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 ms_handle_reset con 0x55944adab800 session 0x55944ae52c40
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944a48a000
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:05.019254+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:06.019413+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:07.019584+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:08.019785+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:09.019913+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:10.020054+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:11.020241+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:12.020375+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:13.020510+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:14.020678+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:15.020877+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:16.020985+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:17.021105+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:18.021248+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:19.021451+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:20.021654+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:21.021857+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:22.022042+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:23.022176+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:24.022383+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:25.022519+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:26.022708+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:27.022972+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:28.023099+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:29.023233+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:30.023368+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:31.023537+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:32.023641+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:33.023780+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:34.023901+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:35.024036+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:36.024153+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:37.024304+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:38.024432+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:39.024627+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:40.024787+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:41.024932+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:42.025070+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:43.025256+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:44.025546+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:45.025700+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:46.026057+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:47.026193+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:48.026342+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:49.026484+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:50.026602+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:51.026767+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:52.026896+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:53.027043+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:54.027164+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:55.027325+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:56.027494+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:57.027627+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:58.027803+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:59.027968+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:00.028457+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:01.028659+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 860160 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:02.028999+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 860160 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:03.029152+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 860160 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.103851318s of 300.344635010s, submitted: 90
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:04.029375+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:05.029527+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:06.029896+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:07.030280+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:08.030465+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:09.030603+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:10.030743+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:11.030883+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:12.031026+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:13.031150+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:14.031279+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:15.031401+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:16.031536+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:17.031692+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:18.031908+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:19.032049+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:20.032204+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:21.032346+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:22.032595+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:23.032759+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:24.032894+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:25.033034+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:26.033194+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:27.033343+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:28.033497+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:29.033641+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:30.033746+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:31.033885+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:32.034022+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:33.034169+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:34.034307+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:35.034436+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:36.034552+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:37.034702+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:38.034856+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:39.034993+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:40.035120+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:41.035269+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:42.035736+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:43.035920+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:44.036088+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:45.036209+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:46.036345+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:47.036487+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:48.036696+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:49.036816+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:50.036924+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:51.037091+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:52.037258+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:53.037397+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:54.037590+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:55.037698+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:56.037832+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:57.037968+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:58.038093+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:59.038227+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:00.038375+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:01.038595+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:02.038822+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:03.038961+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:04.039201+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:05.039349+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:06.039509+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:07.039672+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:08.039826+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:09.039962+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:10.040080+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:11.040248+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:12.040442+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:13.040643+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:14.040768+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:15.040913+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:16.041091+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:17.041255+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:18.041395+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:19.041546+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:20.041733+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:21.041939+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:22.042066+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:23.042226+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:24.042356+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:25.042476+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:26.042625+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:27.042810+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:28.043096+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:29.043309+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:30.043470+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:31.043710+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:32.043839+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:33.043969+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:34.044166+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:35.044314+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:36.044445+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:37.044610+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:38.044786+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:39.045029+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:40.045144+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:41.045290+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:42.045420+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:43.045579+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:44.045709+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:45.045896+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:46.046102+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:47.046340+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:48.046503+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:49.046672+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:50.046804+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:51.047044+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:52.047177+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:53.047302+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:54.047432+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:55.047604+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:56.047707+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:57.047826+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:58.047952+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:59.048053+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:00.048936+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:01.049241+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:02.049642+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:03.049974+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:04.050195+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:05.050321+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:06.050495+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:07.050653+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:08.050798+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:09.050936+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:10.051096+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:11.051368+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:12.051511+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:13.051689+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:14.051927+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:15.052063+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:16.052208+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:17.052409+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:18.052613+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:19.052831+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:20.052997+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:21.053156+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:22.053381+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:23.053588+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:24.053776+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:25.053952+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:26.054141+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:27.054360+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:28.054589+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:29.054796+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:30.055021+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:31.055213+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:32.055391+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:33.055588+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:34.055735+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:35.055840+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:36.056102+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:37.056228+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:38.056428+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:39.056685+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:40.056884+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:41.057099+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:42.057403+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:43.057631+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:44.057831+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:45.058025+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:46.058306+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:47.058518+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:48.058646+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:49.058786+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:50.058996+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:51.059235+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:52.059501+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:53.059730+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:54.059862+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:55.060020+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:56.060234+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:57.060374+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:58.060495+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:59.060631+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:00.060754+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:01.060932+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:02.061170+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:03.061394+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:04.061639+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:05.061798+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:06.061958+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:07.062118+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:08.062265+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:09.062392+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:10.062459+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:11.062621+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:12.062903+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:13.063102+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread fragmentation_score=0.000127 took=0.000037s
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:14.063322+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:15.063469+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:16.063750+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:17.063939+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:18.064062+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:19.064323+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:20.064502+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:21.064702+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:22.064877+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:23.065047+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:24.065196+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:25.065335+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:26.065540+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:27.065751+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:28.065935+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:29.066620+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:30.067312+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:31.067605+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:32.068304+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:33.068448+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:34.068999+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:35.069429+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:36.069888+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:37.070161+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:38.070407+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:39.070629+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:40.070950+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:41.071299+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:42.071655+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:43.071831+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:44.072087+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:45.072335+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:46.072667+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:47.072826+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:48.073066+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:49.073420+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:50.073638+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:51.073899+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:52.074098+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:53.074304+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:54.074436+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:55.074575+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:56.074836+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7030 writes, 29K keys, 7030 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7030 writes, 1364 syncs, 5.15 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:57.075103+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:58.075504+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:59.075844+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:00.075987+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:01.076186+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:02.076320+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:03.076464+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:04.076605+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:05.076861+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:06.076990+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:07.077117+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:08.077299+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:09.077479+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:10.077647+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:11.077859+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:12.078052+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:13.078253+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:14.078396+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:15.078548+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:16.078760+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:17.079006+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:18.079241+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:19.079450+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:20.079662+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:21.079936+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:22.080136+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:23.080302+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:24.080470+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:25.080687+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:26.080857+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:27.081064+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:28.081293+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:29.081604+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:30.081819+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:31.082050+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:32.082246+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:33.082425+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:34.082608+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:35.082766+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:36.082965+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:37.083255+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:38.083469+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:39.083640+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:40.083852+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:41.084076+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:42.084267+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:43.084447+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:44.084679+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:45.084845+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:46.085142+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:47.085346+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:48.085506+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:49.085709+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:50.085895+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:51.086124+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:52.086340+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:53.086527+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:54.086720+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:55.086927+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:56.087147+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:57.087352+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:58.087535+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:59.087842+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:00.088127+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:01.088345+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:02.088502+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:03.088694+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.941345215s of 299.980346680s, submitted: 22
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:04.088898+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 704512 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:05.089029+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:06.089196+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:07.089400+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:08.089624+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:09.089824+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:10.089986+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:11.090206+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:12.090366+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:13.090551+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:14.090747+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:15.090905+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:16.091056+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:17.091239+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:18.091453+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:19.091641+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:20.091823+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:21.092062+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:22.092315+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:23.092690+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:24.092911+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:25.093088+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:26.093279+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:27.093512+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:28.093641+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:29.093760+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:30.093882+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:31.094028+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:32.094306+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:33.094709+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:34.094901+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:35.095054+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:36.095352+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:37.095627+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:38.095791+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:39.096019+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:40.096196+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:41.096443+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:42.096640+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:43.096728+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:44.096896+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:45.097057+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:46.097244+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:47.097395+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:48.097547+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:49.097734+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:50.097873+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:51.098051+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:52.098186+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:53.098346+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:54.098493+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:55.098617+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:56.098750+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:57.098930+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:58.099060+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:59.099202+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:00.099364+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:01.099591+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:02.099747+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:03.099894+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:04.100033+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:05.100189+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:06.100353+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:07.100498+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:08.100671+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:09.100833+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:10.101031+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:11.101236+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:12.101374+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:13.101516+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:14.101670+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:15.101803+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:16.101894+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:17.102013+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:18.102172+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:19.102385+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:20.102528+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:21.102740+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:22.102885+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:23.103014+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:24.103155+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:25.103301+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:26.103410+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:27.103587+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:28.103732+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:29.103866+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:30.104032+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:31.104268+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:32.104405+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:33.104620+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:34.104804+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:35.104974+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:36.105115+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:37.105261+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:38.105394+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:39.105515+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:40.105649+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:41.105847+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:42.106010+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:43.106166+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:44.106324+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:45.106643+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:46.106869+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:47.107085+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:48.107264+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:49.107456+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:50.107648+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:51.107864+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:52.108041+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:53.108203+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:54.108408+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:55.108611+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:56.108773+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:57.108924+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:58.109098+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:59.109320+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:00.109468+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:01.109661+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:02.109807+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:03.109972+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:04.110403+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:05.110740+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:06.110875+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:07.111003+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:08.111124+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:09.111312+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:10.111459+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:11.111675+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:12.111854+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:13.112005+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:14.112278+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:15.112634+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:16.112896+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:17.113231+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:18.113461+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:19.113631+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:20.170833+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:21.170988+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:22.171105+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:23.171217+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:24.171334+0000)
Jan 23 19:11:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 23 19:11:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259935711' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:25.171469+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:26.171625+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:27.171797+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:28.171908+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:29.172186+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:30.172325+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:31.172676+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:32.172828+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:33.173047+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:34.173195+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:35.173341+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:36.173460+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:37.173621+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:38.173807+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:39.174011+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:40.174157+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:41.174394+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 622592 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:42.174527+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 622592 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:43.174700+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:44.174818+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:45.174986+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:46.175137+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:47.175354+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:48.175497+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:49.175711+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:50.175931+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:51.176127+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:52.176289+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:53.176485+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:54.176669+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:55.176817+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:56.176984+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:57.177204+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:58.177369+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:59.177516+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944adab800
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 175.401321411s of 176.169799805s, submitted: 90
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 581632 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:00.177814+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 516096 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:01.177956+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064240 data_alloc: 218103808 data_used: 11969
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 17137664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:02.178126+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fc1d4000/0x0/0x4ffc00000, data 0xd96adb/0xe56000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 121 ms_handle_reset con 0x55944adab800 session 0x55944d579dc0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 17145856 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:03.178342+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944b2cb000
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 25403392 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:04.178629+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 122 ms_handle_reset con 0x55944b2cb000 session 0x55944cdd2540
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:05.178793+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:06.178957+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114499 data_alloc: 218103808 data_used: 12554
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:07.179114+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb9cc000/0x0/0x4ffc00000, data 0x159a27e/0x165e000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:08.179305+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:09.179460+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:10.179650+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb9cc000/0x0/0x4ffc00000, data 0x159a27e/0x165e000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.606542587s of 11.321042061s, submitted: 49
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:11.179852+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116825 data_alloc: 218103808 data_used: 12554
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:12.180490+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9c9000/0x0/0x4ffc00000, data 0x159bcfd/0x1661000, compress 0x0/0x0/0x0, omap 0x13749, meta 0x2bbc8b7), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:13.180707+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:14.180892+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:15.181110+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:16.181305+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116825 data_alloc: 218103808 data_used: 12554
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:17.181629+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:18.181819+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9c9000/0x0/0x4ffc00000, data 0x159bcfd/0x1661000, compress 0x0/0x0/0x0, omap 0x13749, meta 0x2bbc8b7), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:19.182723+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:20.182870+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:21.183036+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116825 data_alloc: 218103808 data_used: 12554
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:22.183171+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:23.183317+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9c9000/0x0/0x4ffc00000, data 0x159bcfd/0x1661000, compress 0x0/0x0/0x0, omap 0x13749, meta 0x2bbc8b7), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944a35fc00
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.901450157s of 12.907519341s, submitted: 14
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9c6000/0x0/0x4ffc00000, data 0x15a1420/0x1666000, compress 0x0/0x0/0x0, omap 0x13749, meta 0x2bbc8b7), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:24.183460+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 25018368 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:25.183613+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 23969792 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 10
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:26.183851+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 24043520 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117725 data_alloc: 218103808 data_used: 12554
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d947c00
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:27.184040+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 23543808 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:28.184200+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9b4000/0x0/0x4ffc00000, data 0x15b1ffe/0x1678000, compress 0x0/0x0/0x0, omap 0x1405b, meta 0x2bbbfa5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 23535616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:29.184316+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 23535616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:30.184458+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 23535616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:31.184614+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87703552 unmapped: 22405120 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9b1000/0x0/0x4ffc00000, data 0x15b5564/0x167b000, compress 0x0/0x0/0x0, omap 0x140f1, meta 0x2bbbf0f), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120291 data_alloc: 218103808 data_used: 12554
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:32.184745+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87703552 unmapped: 22405120 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9ae000/0x0/0x4ffc00000, data 0x15b89a2/0x167e000, compress 0x0/0x0/0x0, omap 0x141ff, meta 0x2bbbe01), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:33.184883+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87703552 unmapped: 22405120 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 11
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:34.185034+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87703552 unmapped: 22405120 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.349903107s of 10.990716934s, submitted: 44
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:35.185175+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87957504 unmapped: 22151168 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:36.185379+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87965696 unmapped: 22142976 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131167 data_alloc: 218103808 data_used: 12554
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:37.185605+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88031232 unmapped: 22077440 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb989000/0x0/0x4ffc00000, data 0x15d6495/0x16a1000, compress 0x0/0x0/0x0, omap 0x14b8d, meta 0x2bbb473), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:38.185736+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 22069248 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:39.185870+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 21864448 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb981000/0x0/0x4ffc00000, data 0x15dfbe0/0x16a9000, compress 0x0/0x0/0x0, omap 0x14c94, meta 0x2bbb36c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:40.186073+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 21815296 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:41.186258+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 21700608 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136811 data_alloc: 218103808 data_used: 12554
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb970000/0x0/0x4ffc00000, data 0x15ef5e6/0x16ba000, compress 0x0/0x0/0x0, omap 0x15310, meta 0x2bbacf0), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:42.186441+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88449024 unmapped: 21659648 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:43.186599+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb96d000/0x0/0x4ffc00000, data 0x15f5696/0x16bf000, compress 0x0/0x0/0x0, omap 0x15524, meta 0x2bbaadc), peers [0,2] op hist [0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 21610496 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:44.186735+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 21610496 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:45.186875+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 21610496 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.973920822s of 11.331918716s, submitted: 109
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:46.194390+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 21577728 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb967000/0x0/0x4ffc00000, data 0x15fb920/0x16c5000, compress 0x0/0x0/0x0, omap 0x157fa, meta 0x2bba806), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135247 data_alloc: 218103808 data_used: 12826
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:47.194600+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 21577728 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:48.194753+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 21577728 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:49.194981+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88563712 unmapped: 21544960 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:50.195119+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 21536768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:51.195322+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 21446656 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb94c000/0x0/0x4ffc00000, data 0x161671b/0x16e0000, compress 0x0/0x0/0x0, omap 0x15f88, meta 0x2bba078), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138019 data_alloc: 218103808 data_used: 12826
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:52.195498+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88694784 unmapped: 21413888 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:53.195649+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88694784 unmapped: 21413888 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:54.195798+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88694784 unmapped: 21413888 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:55.196000+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90832896 unmapped: 19275776 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb946000/0x0/0x4ffc00000, data 0x161cdb2/0x16e6000, compress 0x0/0x0/0x0, omap 0x15fec, meta 0x2bba014), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.473471642s of 10.000333786s, submitted: 42
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:56.196207+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90857472 unmapped: 19251200 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138637 data_alloc: 218103808 data_used: 12826
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:57.196354+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90857472 unmapped: 19251200 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:58.196526+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90857472 unmapped: 19251200 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:59.196693+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90939392 unmapped: 19169280 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fa798000/0x0/0x4ffc00000, data 0x162b501/0x16f4000, compress 0x0/0x0/0x0, omap 0x16532, meta 0x3d59ace), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:00.196915+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90939392 unmapped: 19169280 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:01.197087+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91004928 unmapped: 19103744 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139877 data_alloc: 218103808 data_used: 12826
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:02.197232+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 19013632 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:03.197372+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 19013632 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1645296/0x1711000, compress 0x0/0x0/0x0, omap 0x16997, meta 0x3d59669), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:04.197535+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 18931712 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1646d55/0x1711000, compress 0x0/0x0/0x0, omap 0x16ba4, meta 0x3d5945c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:05.197711+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1646d55/0x1711000, compress 0x0/0x0/0x0, omap 0x16ba4, meta 0x3d5945c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 18931712 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.903821945s of 10.000028610s, submitted: 45
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:06.197864+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1646d55/0x1711000, compress 0x0/0x0/0x0, omap 0x16c24, meta 0x3d593dc), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 18857984 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146959 data_alloc: 218103808 data_used: 12826
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:07.198032+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91258880 unmapped: 18849792 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:08.198196+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91267072 unmapped: 18841600 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:09.198412+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fa768000/0x0/0x4ffc00000, data 0x165810a/0x1722000, compress 0x0/0x0/0x0, omap 0x171b7, meta 0x3d58e49), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91267072 unmapped: 18841600 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fa768000/0x0/0x4ffc00000, data 0x165810a/0x1722000, compress 0x0/0x0/0x0, omap 0x1724b, meta 0x3d58db5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:10.198622+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91267072 unmapped: 18841600 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:11.198800+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149511 data_alloc: 218103808 data_used: 12826
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:12.198998+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:13.199168+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa75a000/0x0/0x4ffc00000, data 0x1666adf/0x1732000, compress 0x0/0x0/0x0, omap 0x175a3, meta 0x3d58a5d), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:14.199293+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa757000/0x0/0x4ffc00000, data 0x1668d5d/0x1735000, compress 0x0/0x0/0x0, omap 0x177d0, meta 0x3d58830), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:15.199478+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.496050358s of 10.000307083s, submitted: 69
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:16.199619+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149771 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:17.199760+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:18.199908+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:19.200040+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa750000/0x0/0x4ffc00000, data 0x166f91b/0x173c000, compress 0x0/0x0/0x0, omap 0x178ae, meta 0x3d58752), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:20.200170+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91332608 unmapped: 18776064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:21.200302+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91332608 unmapped: 18776064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151271 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:22.200504+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91332608 unmapped: 18776064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:23.200734+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91332608 unmapped: 18776064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:24.201057+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91332608 unmapped: 18776064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa740000/0x0/0x4ffc00000, data 0x167f54d/0x174c000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:25.201381+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 18751488 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.725258827s of 10.000419617s, submitted: 18
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:26.201590+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 18751488 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150591 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:27.201939+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 18751488 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa740000/0x0/0x4ffc00000, data 0x167f5a1/0x174c000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:28.202217+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 18751488 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:29.204034+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91381760 unmapped: 18726912 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:30.204321+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91381760 unmapped: 18726912 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:31.204473+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91430912 unmapped: 18677760 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:32.204633+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154717 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 18669568 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa72c000/0x0/0x4ffc00000, data 0x1693bc1/0x1760000, compress 0x0/0x0/0x0, omap 0x17cf5, meta 0x3d5830b), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:33.204792+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 18669568 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:34.204923+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa72c000/0x0/0x4ffc00000, data 0x1693bc1/0x1760000, compress 0x0/0x0/0x0, omap 0x17d8b, meta 0x3d58275), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91512832 unmapped: 18595840 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:35.205126+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91512832 unmapped: 18595840 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.611222267s of 10.000637054s, submitted: 43
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:36.205272+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 17244160 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:37.205495+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1164941 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94019584 unmapped: 16089088 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:38.205622+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 15851520 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:39.205763+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 15654912 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa6d2000/0x0/0x4ffc00000, data 0x16ea838/0x17ba000, compress 0x0/0x0/0x0, omap 0x18d4b, meta 0x3d572b5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:40.205911+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 15687680 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:41.206115+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94437376 unmapped: 15671296 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:42.206303+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167829 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa6cf000/0x0/0x4ffc00000, data 0x16efc1f/0x17bd000, compress 0x0/0x0/0x0, omap 0x18ec7, meta 0x3d57139), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94437376 unmapped: 15671296 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:43.206498+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 15695872 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:44.206628+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 15663104 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:45.206820+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 15663104 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.430392265s of 10.096447945s, submitted: 120
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:46.206948+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 129 handle_osd_map epochs [130,131], i have 129, src has [1,131]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 15572992 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:47.207083+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177921 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94633984 unmapped: 15474688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:48.207297+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fa697000/0x0/0x4ffc00000, data 0x1722f69/0x17f3000, compress 0x0/0x0/0x0, omap 0x199d9, meta 0x3d56627), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94633984 unmapped: 15474688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:49.207489+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 15622144 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:50.207669+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94625792 unmapped: 15482880 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:51.207845+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94633984 unmapped: 15474688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:52.208098+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179695 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 15360000 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa66c000/0x0/0x4ffc00000, data 0x174ae6e/0x181c000, compress 0x0/0x0/0x0, omap 0x1a727, meta 0x3d558d9), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:53.208289+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 15360000 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa66c000/0x0/0x4ffc00000, data 0x174ae6e/0x181c000, compress 0x0/0x0/0x0, omap 0x1a727, meta 0x3d558d9), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:54.208431+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95895552 unmapped: 14213120 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:55.208655+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 14360576 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:56.208858+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.514696121s of 10.278063774s, submitted: 174
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 14737408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:57.209022+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181893 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 14737408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:58.209180+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 14680064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:59.209965+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x1779201/0x184b000, compress 0x0/0x0/0x0, omap 0x1ac7f, meta 0x3d55381), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 14581760 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:00.210171+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95543296 unmapped: 14565376 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:01.210350+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 14548992 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:02.210488+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187163 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 14548992 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:03.210636+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa62e000/0x0/0x4ffc00000, data 0x1788e64/0x185c000, compress 0x0/0x0/0x0, omap 0x1b138, meta 0x3d54ec8), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 14548992 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa62a000/0x0/0x4ffc00000, data 0x178f3e1/0x1862000, compress 0x0/0x0/0x0, omap 0x1b138, meta 0x3d54ec8), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:04.210787+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 14327808 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:05.210990+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 14327808 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:06.211164+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95854592 unmapped: 14254080 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa623000/0x0/0x4ffc00000, data 0x1795be9/0x1869000, compress 0x0/0x0/0x0, omap 0x1b1ce, meta 0x3d54e32), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:07.211362+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185971 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95911936 unmapped: 14196736 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:08.211516+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95911936 unmapped: 14196736 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:09.211688+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.706715584s of 13.003004074s, submitted: 34
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 14163968 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:10.211886+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 14163968 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:11.212131+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 14319616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:12.212335+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186251 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 14319616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa61a000/0x0/0x4ffc00000, data 0x179eca7/0x1872000, compress 0x0/0x0/0x0, omap 0x1b264, meta 0x3d54d9c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:13.212525+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa61a000/0x0/0x4ffc00000, data 0x179eca7/0x1872000, compress 0x0/0x0/0x0, omap 0x1b264, meta 0x3d54d9c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 14319616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:14.212669+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95903744 unmapped: 14204928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:15.212926+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95903744 unmapped: 14204928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:16.213100+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95903744 unmapped: 14204928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:17.213253+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186451 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95903744 unmapped: 14204928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:18.213444+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95911936 unmapped: 14196736 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:19.213595+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa5fe000/0x0/0x4ffc00000, data 0x17bb0df/0x188e000, compress 0x0/0x0/0x0, omap 0x1b6a2, meta 0x3d5495e), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.553693771s of 10.001129150s, submitted: 18
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95911936 unmapped: 14196736 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:20.213792+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 96149504 unmapped: 13959168 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:21.213983+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 96313344 unmapped: 13795328 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:22.214149+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198723 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 12525568 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa5bb000/0x0/0x4ffc00000, data 0x17fbe9f/0x18d1000, compress 0x0/0x0/0x0, omap 0x1bc46, meta 0x3d543ba), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:23.214269+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12419072 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:24.214457+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa58d000/0x0/0x4ffc00000, data 0x1827d0d/0x18fe000, compress 0x0/0x0/0x0, omap 0x1c152, meta 0x3d53eae), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97599488 unmapped: 12509184 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:25.214620+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97542144 unmapped: 12566528 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:26.214775+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12443648 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa53c000/0x0/0x4ffc00000, data 0x1878476/0x1950000, compress 0x0/0x0/0x0, omap 0x1c5c6, meta 0x3d53a3a), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:27.214947+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214687 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12230656 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:28.215092+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa50e000/0x0/0x4ffc00000, data 0x18a3c15/0x197b000, compress 0x0/0x0/0x0, omap 0x1cc13, meta 0x3d533ed), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97296384 unmapped: 12812288 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:29.215259+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.288331032s of 10.002779007s, submitted: 127
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 12640256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:30.215457+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 12640256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:31.216219+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa4ee000/0x0/0x4ffc00000, data 0x18c630d/0x199e000, compress 0x0/0x0/0x0, omap 0x1cdbd, meta 0x3d53243), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97542144 unmapped: 12566528 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4ee000/0x0/0x4ffc00000, data 0x18c630d/0x199e000, compress 0x0/0x0/0x0, omap 0x1cdbd, meta 0x3d53243), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:32.216527+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212551 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 11468800 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:33.216782+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 11468800 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:34.216950+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 11296768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:35.217159+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 11296768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4d2000/0x0/0x4ffc00000, data 0x18dcf5a/0x19b7000, compress 0x0/0x0/0x0, omap 0x1d283, meta 0x3d52d7d), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:36.217383+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x18dec61/0x19b9000, compress 0x0/0x0/0x0, omap 0x1d283, meta 0x3d52d7d), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 11296768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:37.217549+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218027 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x18dec61/0x19b9000, compress 0x0/0x0/0x0, omap 0x1d283, meta 0x3d52d7d), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 11296768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:38.217697+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98443264 unmapped: 11665408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:39.217850+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98443264 unmapped: 11665408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:40.217994+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.923088074s of 11.076400757s, submitted: 38
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 11550720 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:41.218146+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 11550720 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:42.218271+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218883 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 11550720 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4a8000/0x0/0x4ffc00000, data 0x190a147/0x19e4000, compress 0x0/0x0/0x0, omap 0x1d4db, meta 0x3d52b25), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:43.218397+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11370496 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:44.218631+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4a8000/0x0/0x4ffc00000, data 0x190a0e5/0x19e3000, compress 0x0/0x0/0x0, omap 0x1d571, meta 0x3d52a8f), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 11231232 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:45.218844+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98885632 unmapped: 11223040 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:46.219005+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0x1938d69/0x1a11000, compress 0x0/0x0/0x0, omap 0x1d8f5, meta 0x3d5270b), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98304000 unmapped: 11804672 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:47.219198+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218923 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98304000 unmapped: 11804672 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:48.219327+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 11714560 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:49.219498+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98615296 unmapped: 11493376 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:50.219631+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98615296 unmapped: 11493376 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.679545403s of 10.544363976s, submitted: 67
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:51.219926+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 9756672 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x19b512c/0x1a8e000, compress 0x0/0x0/0x0, omap 0x1bff2, meta 0x3d5400e), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:52.220126+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228769 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x19b512c/0x1a8e000, compress 0x0/0x0/0x0, omap 0x1bff2, meta 0x3d5400e), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 9912320 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:53.220306+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 9912320 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:54.220449+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 9895936 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa3fb000/0x0/0x4ffc00000, data 0x19b85d4/0x1a91000, compress 0x0/0x0/0x0, omap 0x1be97, meta 0x3d54169), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:55.220651+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 9854976 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:56.220878+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 9846784 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:57.221017+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa3dd000/0x0/0x4ffc00000, data 0x19d5af4/0x1aaf000, compress 0x0/0x0/0x0, omap 0x1be97, meta 0x3d54169), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226787 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 9846784 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:58.221173+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 10354688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:59.221346+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 10354688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:00.221481+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 99950592 unmapped: 10158080 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.632919312s of 10.113967896s, submitted: 56
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:01.221725+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100122624 unmapped: 9986048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:02.221960+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229779 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100122624 unmapped: 9986048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa3a2000/0x0/0x4ffc00000, data 0x1a11b1f/0x1aea000, compress 0x0/0x0/0x0, omap 0x1bfc3, meta 0x3d5403d), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:03.222118+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 9887744 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:04.222254+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 9674752 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:05.222395+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 9166848 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:06.222507+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa374000/0x0/0x4ffc00000, data 0x1a3f2d9/0x1b18000, compress 0x0/0x0/0x0, omap 0x1bfc3, meta 0x3d5403d), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 9142272 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:07.222636+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232647 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 8962048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:08.222959+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa354000/0x0/0x4ffc00000, data 0x1a5e928/0x1b37000, compress 0x0/0x0/0x0, omap 0x1bfc3, meta 0x3d5403d), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 8962048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:09.223114+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 8732672 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:10.223331+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101572608 unmapped: 8536064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:11.223540+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.970051765s of 10.277941704s, submitted: 79
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101572608 unmapped: 8536064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:12.223703+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa310000/0x0/0x4ffc00000, data 0x1aa1462/0x1b7c000, compress 0x0/0x0/0x0, omap 0x1bfc3, meta 0x3d5403d), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240981 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 8298496 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:13.223843+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101236736 unmapped: 8871936 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:14.223986+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d948000
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 8691712 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d949c00
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:15.224112+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 12
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 7192576 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:16.224316+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 6946816 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa2a4000/0x0/0x4ffc00000, data 0x1b0ac75/0x1be8000, compress 0x0/0x0/0x0, omap 0x1c0ef, meta 0x3d53f11), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:17.224510+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252477 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 6914048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:18.224691+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 6914048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:19.224853+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103358464 unmapped: 6750208 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:20.225039+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 6545408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:21.225230+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa26d000/0x0/0x4ffc00000, data 0x1b464ec/0x1c1f000, compress 0x0/0x0/0x0, omap 0x1c0ef, meta 0x3d53f11), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.763832092s of 10.002084732s, submitted: 85
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 6537216 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:22.225386+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252385 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 7151616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:23.225635+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 7151616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:24.225816+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 7151616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:25.225966+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 7151616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:26.226121+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa243000/0x0/0x4ffc00000, data 0x1b6ce12/0x1c47000, compress 0x0/0x0/0x0, omap 0x1c444, meta 0x3d53bbc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 6946816 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa22e000/0x0/0x4ffc00000, data 0x1b83bd0/0x1c5e000, compress 0x0/0x0/0x0, omap 0x1c444, meta 0x3d53bbc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:27.226293+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254653 data_alloc: 218103808 data_used: 13249
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 6946816 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:28.226487+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 7364608 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:29.226638+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 7364608 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:30.226843+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 7364608 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:31.227080+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.163558006s of 10.366992950s, submitted: 50
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102825984 unmapped: 7282688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:32.227253+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa210000/0x0/0x4ffc00000, data 0x1b9e8d3/0x1c7a000, compress 0x0/0x0/0x0, omap 0x1c720, meta 0x3d538e0), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262007 data_alloc: 218103808 data_used: 13249
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104185856 unmapped: 5922816 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:33.227452+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 5906432 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:34.227729+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 5931008 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:35.227974+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 5906432 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:36.228172+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104226816 unmapped: 5881856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:37.228370+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa1c6000/0x0/0x4ffc00000, data 0x1beba98/0x1cc6000, compress 0x0/0x0/0x0, omap 0x1c720, meta 0x3d538e0), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262801 data_alloc: 218103808 data_used: 13253
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6062080 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa1c7000/0x0/0x4ffc00000, data 0x1beba62/0x1cc5000, compress 0x0/0x0/0x0, omap 0x1c720, meta 0x3d538e0), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:38.228595+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104095744 unmapped: 6012928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:39.228778+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa1b3000/0x0/0x4ffc00000, data 0x1bffcb3/0x1cd9000, compress 0x0/0x0/0x0, omap 0x1c720, meta 0x3d538e0), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104103936 unmapped: 6004736 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:40.228928+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104210432 unmapped: 5898240 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:41.229067+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.892628670s of 10.119803429s, submitted: 95
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 5808128 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:42.229197+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267505 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 5808128 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:43.229378+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa181000/0x0/0x4ffc00000, data 0x1c31283/0x1d0b000, compress 0x0/0x0/0x0, omap 0x1ccc2, meta 0x3d5333e), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 5890048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:44.229755+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa181000/0x0/0x4ffc00000, data 0x1c31283/0x1d0b000, compress 0x0/0x0/0x0, omap 0x1ccc2, meta 0x3d5333e), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 5890048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:45.230032+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104284160 unmapped: 5824512 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:46.230228+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104284160 unmapped: 5824512 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:47.230405+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266231 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104284160 unmapped: 5824512 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:48.230589+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104284160 unmapped: 5824512 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:49.230773+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 5791744 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:50.230931+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa161000/0x0/0x4ffc00000, data 0x1c4e329/0x1d29000, compress 0x0/0x0/0x0, omap 0x1cf87, meta 0x3d53079), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 5750784 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:51.231107+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 5750784 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:52.231256+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273061 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104513536 unmapped: 5595136 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:53.231414+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.602644920s of 11.791436195s, submitted: 71
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104513536 unmapped: 5595136 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:54.231633+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa139000/0x0/0x4ffc00000, data 0x1c74ddd/0x1d51000, compress 0x0/0x0/0x0, omap 0x1d337, meta 0x3d52cc9), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105463808 unmapped: 4644864 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:55.231779+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 4579328 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:56.231932+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 4579328 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:57.232148+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280915 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 4497408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:58.232348+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 4431872 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:59.232526+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 4431872 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:00.232667+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa0fd000/0x0/0x4ffc00000, data 0x1cad3eb/0x1d8c000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 4431872 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:01.232857+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105029632 unmapped: 5079040 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:02.233028+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280491 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105029632 unmapped: 5079040 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:03.233253+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa0cf000/0x0/0x4ffc00000, data 0x1cde664/0x1dbd000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa0cf000/0x0/0x4ffc00000, data 0x1cde664/0x1dbd000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 5046272 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:04.233440+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.194561958s of 11.065369606s, submitted: 60
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 4907008 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:05.233633+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 4907008 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:06.233827+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 4907008 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:07.234009+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284335 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa0a9000/0x0/0x4ffc00000, data 0x1d038b2/0x1de2000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 4907008 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:08.234149+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa0a9000/0x0/0x4ffc00000, data 0x1d038b2/0x1de2000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 4718592 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:09.234339+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:10.234494+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 4718592 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:11.234627+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 3465216 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:12.234847+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 3457024 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa06a000/0x0/0x4ffc00000, data 0x1d42237/0x1e22000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1290623 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:13.235024+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 3457024 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:14.235136+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 3252224 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.547468185s of 10.381765366s, submitted: 62
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:15.235251+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 3104768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa04b000/0x0/0x4ffc00000, data 0x1d60a71/0x1e40000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:16.235444+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 2940928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:17.235649+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3022848 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288001 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:18.235866+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3022848 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:19.236076+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107094016 unmapped: 3014656 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa002000/0x0/0x4ffc00000, data 0x1dabc5b/0x1e8a000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:20.236270+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 2809856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:21.236656+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 2695168 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:22.236873+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 2695168 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304871 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9f82000/0x0/0x4ffc00000, data 0x1e2780a/0x1f08000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:23.237000+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 1859584 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:24.237183+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 1859584 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:25.237382+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 1859584 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.868585587s of 10.334239960s, submitted: 88
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:26.237500+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 2670592 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:27.237671+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 2670592 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311337 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:28.237800+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9f08000/0x0/0x4ffc00000, data 0x1ea1267/0x1f83000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108601344 unmapped: 2555904 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:29.237950+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 3440640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:30.238114+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 3440640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 ms_handle_reset con 0x55944d948000 session 0x55944acbea80
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 ms_handle_reset con 0x55944d949c00 session 0x55944d3ee8c0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9eff000/0x0/0x4ffc00000, data 0x1eacb77/0x1f8d000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [0,0,0,0,0,3])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:31.238416+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 109543424 unmapped: 1613824 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9edd000/0x0/0x4ffc00000, data 0x1ecbeb2/0x1fad000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:32.238690+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 1261568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 13
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310895 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:33.238884+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 1171456 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:34.239156+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 1220608 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1f1b991/0x1ffe000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:35.239340+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 925696 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.739799500s of 10.003137589s, submitted: 314
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:36.239595+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 925696 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:37.239712+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 876544 heap: 112205824 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323303 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:38.239835+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 548864 heap: 112205824 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9ded000/0x0/0x4ffc00000, data 0x1fbb818/0x209d000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:39.240158+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 540672 heap: 112205824 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:40.240293+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112713728 unmapped: 540672 heap: 113254400 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:41.240703+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 475136 heap: 113254400 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:42.240851+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 1507328 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9d9a000/0x0/0x4ffc00000, data 0x200e8f4/0x20f0000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x3d512fc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329299 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:43.240993+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 1507328 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:44.241106+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 1507328 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:45.241227+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 1417216 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.323970795s of 10.002117157s, submitted: 111
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:46.241356+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 262144 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:47.241601+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 122880 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325725 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-mon[75097]: pgmap v1257: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:14 compute-0 ceph-mon[75097]: from='client.14586 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:14 compute-0 ceph-mon[75097]: from='client.14590 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:14 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/971006246' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 23 19:11:14 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4259935711' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9d60000/0x0/0x4ffc00000, data 0x204b38f/0x212c000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x3d512fc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:48.241756+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 122880 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:49.241897+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 122880 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:50.242081+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 122880 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:51.242320+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 1097728 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:52.242462+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 1097728 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331523 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:53.242628+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 1630208 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9ced000/0x0/0x4ffc00000, data 0x20c02b3/0x219f000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x3d512fc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:54.242737+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 1597440 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:55.242856+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 1597440 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.663146019s of 10.544254303s, submitted: 83
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:56.242984+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 1531904 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:57.243193+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 1531904 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343355 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8ae0000/0x0/0x4ffc00000, data 0x2129bb0/0x220b000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x4ef12fc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:58.243320+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 2580480 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:59.243682+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 2342912 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:00.243778+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 2342912 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:01.243935+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 2072576 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8a97000/0x0/0x4ffc00000, data 0x21736d7/0x2254000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x4ef12fc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:02.244059+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 1892352 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339227 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:03.244170+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 1884160 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:04.244339+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 1884160 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:05.244477+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 1933312 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.875925064s of 10.000294685s, submitted: 74
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:06.244646+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8a5d000/0x0/0x4ffc00000, data 0x21adb23/0x228e000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x4ef12fc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 860160 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:07.244854+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 860160 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341421 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:08.245041+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 786432 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:09.245124+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 778240 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:10.245279+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 778240 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:11.245532+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 778240 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ab000/0x0/0x4ffc00000, data 0x21bf83a/0x229f000, compress 0x0/0x0/0x0, omap 0x22484, meta 0x608db7c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:12.245783+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 770048 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343289 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:13.245924+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 770048 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:14.246149+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 753664 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bf86d/0x229e000, compress 0x0/0x0/0x0, omap 0x22484, meta 0x608db7c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:15.246280+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 753664 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bf86d/0x229e000, compress 0x0/0x0/0x0, omap 0x22484, meta 0x608db7c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.615035057s of 10.021672249s, submitted: 32
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:16.246388+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 753664 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:17.246628+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 753664 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342715 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:18.246744+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 753664 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:19.246892+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:20.247102+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ae000/0x0/0x4ffc00000, data 0x21bf969/0x229e000, compress 0x0/0x0/0x0, omap 0x225ac, meta 0x608da54), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:21.247257+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:22.247380+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341597 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:23.247515+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bf937/0x229e000, compress 0x0/0x0/0x0, omap 0x22321, meta 0x608dcdf), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:24.247665+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:25.247811+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:26.247996+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.763580322s of 10.252605438s, submitted: 23
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:27.248166+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343145 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:28.248317+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ab000/0x0/0x4ffc00000, data 0x21bfb5e/0x229f000, compress 0x0/0x0/0x0, omap 0x23561, meta 0x608ca9f), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ab000/0x0/0x4ffc00000, data 0x21bfb5e/0x229f000, compress 0x0/0x0/0x0, omap 0x23561, meta 0x608ca9f), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:29.248497+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:30.248619+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bfc28/0x229f000, compress 0x0/0x0/0x0, omap 0x23561, meta 0x608ca9f), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:31.248773+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:32.248900+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117768192 unmapped: 729088 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342523 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:33.249036+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 720896 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:34.249284+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 720896 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bfbc3/0x229e000, compress 0x0/0x0/0x0, omap 0x23685, meta 0x608c97b), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:35.249417+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 720896 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:36.249644+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 720896 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.369353294s of 10.191442490s, submitted: 32
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:37.249761+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 720896 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344071 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:38.249940+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bfcc3/0x229f000, compress 0x0/0x0/0x0, omap 0x237a9, meta 0x608c857), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 712704 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:39.250130+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bfcfa/0x229f000, compress 0x0/0x0/0x0, omap 0x237a9, meta 0x608c857), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 679936 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:40.250311+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 679936 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:41.250605+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 671744 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:42.250766+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 671744 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bfd2d/0x229f000, compress 0x0/0x0/0x0, omap 0x237a9, meta 0x608c857), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344103 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:43.250983+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:44.251279+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:45.251496+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:46.251649+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.359336853s of 10.002920151s, submitted: 29
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:47.251871+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345189 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:48.252095+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bfec0/0x229f000, compress 0x0/0x0/0x0, omap 0x249e9, meta 0x608b617), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:49.252292+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bfec0/0x229f000, compress 0x0/0x0/0x0, omap 0x249e9, meta 0x608b617), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:50.252545+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:51.252837+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 622592 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:52.253050+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 622592 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bff87/0x229e000, compress 0x0/0x0/0x0, omap 0x249e9, meta 0x608b617), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:53.253258+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344503 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 622592 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:54.253469+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bff87/0x229e000, compress 0x0/0x0/0x0, omap 0x249e9, meta 0x608b617), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 622592 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:55.253687+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 1654784 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:56.253836+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3238 syncs, 3.51 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4350 writes, 15K keys, 4350 commit groups, 1.0 writes per commit group, ingest: 20.29 MB, 0.03 MB/s
                                           Interval WAL: 4350 writes, 1874 syncs, 2.32 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 1646592 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.196742058s of 10.349185944s, submitted: 31
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:57.253976+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:58.254180+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345843 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21c00b3/0x229f000, compress 0x0/0x0/0x0, omap 0x24b0d, meta 0x608b4f3), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:59.254351+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 1622016 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21c00b3/0x229f000, compress 0x0/0x0/0x0, omap 0x24b0d, meta 0x608b4f3), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21c00b3/0x229f000, compress 0x0/0x0/0x0, omap 0x24b0d, meta 0x608b4f3), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:00.254490+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 1622016 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:01.254706+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 1622016 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:02.254865+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 1622016 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 ms_handle_reset con 0x559449cd6800 session 0x559448b4c000
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944adab800
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21c0083/0x229e000, compress 0x0/0x0/0x0, omap 0x2af24, meta 0x60850dc), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:03.255110+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347249 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 1622016 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc ms_handle_reset ms_handle_reset con 0x55944a35f400
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: get_auth_request con 0x55944d8fac00 auth_method 0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:04.255253+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 ms_handle_reset con 0x55944adaa800 session 0x55944c8f88c0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d17b400
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 ms_handle_reset con 0x55944a48a000 session 0x55944d190fc0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944adaa000
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:05.255491+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:06.255650+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:07.255805+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:08.255977+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346801 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21c011d/0x229f000, compress 0x0/0x0/0x0, omap 0x2b446, meta 0x6084bba), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.499647141s of 11.616078377s, submitted: 24
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:09.256183+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:10.256343+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:11.256513+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21c0151/0x229f000, compress 0x0/0x0/0x0, omap 0x2b720, meta 0x60848e0), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 1630208 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:12.256694+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:13.256815+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348111 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:14.256955+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:15.257117+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78aa000/0x0/0x4ffc00000, data 0x21c0278/0x229f000, compress 0x0/0x0/0x0, omap 0x2bc42, meta 0x60843be), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:16.257281+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:17.257520+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21c02ed/0x22a0000, compress 0x0/0x0/0x0, omap 0x2bd1d, meta 0x60842e3), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 142 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:18.257693+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352801 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.166779518s of 10.339650154s, submitted: 21
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:19.257871+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f78a7000/0x0/0x4ffc00000, data 0x21c1ef2/0x22a3000, compress 0x0/0x0/0x0, omap 0x2c10b, meta 0x6083ef5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 1581056 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:20.258018+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f78a7000/0x0/0x4ffc00000, data 0x21c1ef2/0x22a3000, compress 0x0/0x0/0x0, omap 0x2c39c, meta 0x6083c64), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 1581056 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:21.258221+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f78a7000/0x0/0x4ffc00000, data 0x21c1f57/0x22a3000, compress 0x0/0x0/0x0, omap 0x2c39c, meta 0x6083c64), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 1548288 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:22.258424+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 1548288 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:23.258604+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352401 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:24.258743+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:25.258868+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f78a8000/0x0/0x4ffc00000, data 0x21c1f46/0x22a3000, compress 0x0/0x0/0x0, omap 0x2c79a, meta 0x6083866), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:26.259068+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:27.259211+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 1523712 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:28.259374+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352481 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 1523712 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:29.259517+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.936616898s of 10.705034256s, submitted: 74
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a4000/0x0/0x4ffc00000, data 0x21c3afd/0x22a5000, compress 0x0/0x0/0x0, omap 0x2d08a, meta 0x6082f76), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:30.259658+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:31.260137+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a8000/0x0/0x4ffc00000, data 0x21c3b91/0x22a4000, compress 0x0/0x0/0x0, omap 0x2d43f, meta 0x6082bc1), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:32.260392+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:33.260618+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354649 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:34.260808+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a7000/0x0/0x4ffc00000, data 0x21c3bc4/0x22a4000, compress 0x0/0x0/0x0, omap 0x2d719, meta 0x60828e7), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:35.260970+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:36.261154+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:37.261296+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:38.261482+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356629 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a6000/0x0/0x4ffc00000, data 0x21c3dc0/0x22a6000, compress 0x0/0x0/0x0, omap 0x2dc3b, meta 0x60823c5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:39.261618+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.980519295s of 10.002510071s, submitted: 24
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:40.261800+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:41.262010+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:42.262199+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a2000/0x0/0x4ffc00000, data 0x21c3eba/0x22a7000, compress 0x0/0x0/0x0, omap 0x2df5e, meta 0x60820a2), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:43.262374+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358321 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:44.262519+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944a48a000
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118079488 unmapped: 1466368 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:45.262652+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 1458176 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:46.262824+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 1458176 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:47.262970+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 14
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118177792 unmapped: 1368064 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a4000/0x0/0x4ffc00000, data 0x21c40ba/0x22a8000, compress 0x0/0x0/0x0, omap 0x2e512, meta 0x6081aee), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:48.263108+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362463 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118185984 unmapped: 1359872 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:49.263295+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.368686676s of 10.002033234s, submitted: 38
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118185984 unmapped: 1359872 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:50.263442+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118194176 unmapped: 1351680 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:51.263675+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118202368 unmapped: 1343488 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:52.263822+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118218752 unmapped: 1327104 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:53.263980+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372135 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7878000/0x0/0x4ffc00000, data 0x21ed6df/0x22d2000, compress 0x0/0x0/0x0, omap 0x2ec33, meta 0x60813cd), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 1556480 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:54.264224+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 ms_handle_reset con 0x55944a4b8000 session 0x55944b310540
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d949800
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:55.264391+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 1507328 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:56.264545+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7827000/0x0/0x4ffc00000, data 0x223fe5c/0x2324000, compress 0x0/0x0/0x0, omap 0x2eda0, meta 0x6081260), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 1122304 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f77fb000/0x0/0x4ffc00000, data 0x226cebf/0x2351000, compress 0x0/0x0/0x0, omap 0x2ee32, meta 0x60811ce), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:57.264743+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f77fb000/0x0/0x4ffc00000, data 0x226cebf/0x2351000, compress 0x0/0x0/0x0, omap 0x2ee32, meta 0x60811ce), peers [0,2] op hist [0,0,0,0,0,0,0,0,1,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119480320 unmapped: 1114112 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x2279b91/0x235d000, compress 0x0/0x0/0x0, omap 0x2eec4, meta 0x608113c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:58.265026+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378845 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 974848 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:59.265200+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.478974342s of 10.002267838s, submitted: 104
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 1933312 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:00.265367+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 1933312 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:01.265543+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 950272 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:02.265686+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121036800 unmapped: 606208 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:03.265909+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384765 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121036800 unmapped: 606208 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7732000/0x0/0x4ffc00000, data 0x233650c/0x241a000, compress 0x0/0x0/0x0, omap 0x2f677, meta 0x6080989), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:04.266067+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120987648 unmapped: 655360 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:05.266291+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 1368064 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:06.266438+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 2277376 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:07.266603+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120209408 unmapped: 2482176 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f76bf000/0x0/0x4ffc00000, data 0x23a68fb/0x248c000, compress 0x0/0x0/0x0, omap 0x2fd98, meta 0x6080268), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:08.266802+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1394903 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 2105344 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:09.266983+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.516779900s of 10.085855484s, submitted: 151
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121659392 unmapped: 1032192 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:10.267138+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121692160 unmapped: 2048000 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:11.267342+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 2580480 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:12.267489+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121217024 unmapped: 2523136 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f764b000/0x0/0x4ffc00000, data 0x241c021/0x2501000, compress 0x0/0x0/0x0, omap 0x301df, meta 0x607fe21), peers [0,2] op hist [0,0,0,0,0,0,3])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:13.267697+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406365 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 2424832 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:14.271666+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121634816 unmapped: 2105344 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f75f4000/0x0/0x4ffc00000, data 0x2471cde/0x2557000, compress 0x0/0x0/0x0, omap 0x3034c, meta 0x607fcb4), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:15.271804+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f75f4000/0x0/0x4ffc00000, data 0x2471cde/0x2557000, compress 0x0/0x0/0x0, omap 0x3034c, meta 0x607fcb4), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121634816 unmapped: 2105344 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:16.272106+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122429440 unmapped: 2359296 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:17.272239+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7598000/0x0/0x4ffc00000, data 0x24ce02c/0x25b3000, compress 0x0/0x0/0x0, omap 0x304b9, meta 0x607fb47), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 1966080 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:18.272381+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401013 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 1966080 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:19.272523+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 1966080 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:20.272705+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7599000/0x0/0x4ffc00000, data 0x24ce091/0x25b3000, compress 0x0/0x0/0x0, omap 0x30626, meta 0x607f9da), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.475150108s of 10.776302338s, submitted: 134
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123215872 unmapped: 1572864 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:21.272881+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123215872 unmapped: 1572864 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:22.273083+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123215872 unmapped: 1572864 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f755e000/0x0/0x4ffc00000, data 0x250a6bf/0x25ee000, compress 0x0/0x0/0x0, omap 0x30ab6, meta 0x607f54a), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:23.273255+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404895 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 2236416 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:24.273424+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 2236416 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:25.273583+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f755e000/0x0/0x4ffc00000, data 0x250a6bf/0x25ee000, compress 0x0/0x0/0x0, omap 0x30b48, meta 0x607f4b8), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 3104768 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:26.273786+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7536000/0x0/0x4ffc00000, data 0x2532350/0x2616000, compress 0x0/0x0/0x0, omap 0x30b48, meta 0x607f4b8), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 3014656 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:27.273946+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 3006464 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:28.274089+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405687 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 3006464 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:29.274255+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 2850816 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:30.274397+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.857073784s of 10.541455269s, submitted: 34
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 2850816 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:31.274634+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f751d000/0x0/0x4ffc00000, data 0x254b608/0x262f000, compress 0x0/0x0/0x0, omap 0x30eb4, meta 0x607f14c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 2850816 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:32.274856+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123068416 unmapped: 2768896 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:33.274998+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405681 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 2686976 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:34.275168+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 2654208 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:35.275298+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 2654208 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7505000/0x0/0x4ffc00000, data 0x2564939/0x2647000, compress 0x0/0x0/0x0, omap 0x31220, meta 0x607ede0), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:36.275437+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 2457600 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:37.275656+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 2514944 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:38.275819+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409723 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 2424832 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:39.275956+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f74ec000/0x0/0x4ffc00000, data 0x257a929/0x265e000, compress 0x0/0x0/0x0, omap 0x31127, meta 0x607eed9), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 2424832 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:40.276117+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 2424832 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.260945320s of 10.114643097s, submitted: 44
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:41.276401+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 1359872 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:42.276614+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f74df000/0x0/0x4ffc00000, data 0x2589e89/0x266d000, compress 0x0/0x0/0x0, omap 0x31127, meta 0x607eed9), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 1359872 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:43.276750+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412961 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 1335296 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:44.277078+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 1269760 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:45.277287+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f74da000/0x0/0x4ffc00000, data 0x258b908/0x2670000, compress 0x0/0x0/0x0, omap 0x3149e, meta 0x607eb62), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 1269760 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:46.277476+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 1679360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:47.277664+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 1679360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:48.277817+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412961 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 1679360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:49.278001+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f74ca000/0x0/0x4ffc00000, data 0x259d6bc/0x2682000, compress 0x0/0x0/0x0, omap 0x31bd0, meta 0x607e430), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 2727936 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:50.278182+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 2654208 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.887678146s of 10.001625061s, submitted: 29
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:51.278379+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 2654208 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:52.278935+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 2654208 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:53.279120+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f74bb000/0x0/0x4ffc00000, data 0x25ac412/0x2691000, compress 0x0/0x0/0x0, omap 0x31d3d, meta 0x607e2c3), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412345 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 2555904 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:54.279252+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 2555904 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:55.279398+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 2555904 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:56.279555+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124559360 unmapped: 2326528 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:57.279819+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f7475000/0x0/0x4ffc00000, data 0x25f281e/0x26d7000, compress 0x0/0x0/0x0, omap 0x3213b, meta 0x607dec5), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 2318336 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:58.279992+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419971 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 2318336 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:59.280163+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f7475000/0x0/0x4ffc00000, data 0x25f2883/0x26d7000, compress 0x0/0x0/0x0, omap 0x32216, meta 0x607ddea), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 2187264 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:00.280331+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.600919724s of 10.001583099s, submitted: 23
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 2187264 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:01.280514+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124788736 unmapped: 2097152 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:02.280671+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 3284992 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:03.280798+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428087 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 3260416 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:04.280931+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123633664 unmapped: 3252224 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:05.281065+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f742d000/0x0/0x4ffc00000, data 0x2637a16/0x271f000, compress 0x0/0x0/0x0, omap 0x326ef, meta 0x607d911), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124796928 unmapped: 2088960 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:06.281203+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124985344 unmapped: 1900544 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:07.281403+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124985344 unmapped: 1900544 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:08.281547+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428321 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 1728512 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:09.281733+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 1728512 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:10.281911+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.224463463s of 10.038451195s, submitted: 48
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 1728512 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:11.282334+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f73f7000/0x0/0x4ffc00000, data 0x266e336/0x2755000, compress 0x0/0x0/0x0, omap 0x32656, meta 0x607d9aa), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 1728512 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:12.282602+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 1564672 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:13.282850+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430965 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 1564672 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:14.282988+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f73d4000/0x0/0x4ffc00000, data 0x268dc71/0x2776000, compress 0x0/0x0/0x0, omap 0x32ac5, meta 0x607d53b), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 1564672 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:15.283137+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 1515520 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:16.283273+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 1515520 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:17.283435+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 1515520 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:18.283615+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1433203 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 1507328 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:19.283773+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125566976 unmapped: 1318912 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:20.283938+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f73a6000/0x0/0x4ffc00000, data 0x26b8deb/0x27a4000, compress 0x0/0x0/0x0, omap 0x32f53, meta 0x607d0ad), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125509632 unmapped: 2424832 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:21.284127+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7379000/0x0/0x4ffc00000, data 0x26e7b1a/0x27d3000, compress 0x0/0x0/0x0, omap 0x3383e, meta 0x607c7c2), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.544516563s of 10.881173134s, submitted: 74
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 2416640 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:22.284387+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 2416640 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:23.284621+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440229 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 2220032 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:24.284811+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7379000/0x0/0x4ffc00000, data 0x26e7b49/0x27d2000, compress 0x0/0x0/0x0, omap 0x33962, meta 0x607c69e), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 2220032 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:25.284995+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125657088 unmapped: 2277376 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:26.285203+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f7363000/0x0/0x4ffc00000, data 0x26fe25a/0x27e9000, compress 0x0/0x0/0x0, omap 0x33962, meta 0x607c69e), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 2686976 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:27.285400+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 2686976 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:28.285539+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445573 data_alloc: 218103808 data_used: 13249
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 1245184 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:29.285694+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 974848 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:30.285841+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 2023424 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:31.286003+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f7300000/0x0/0x4ffc00000, data 0x275f3af/0x284c000, compress 0x0/0x0/0x0, omap 0x3438e, meta 0x607bc72), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:32.286156+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 2023424 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.106158257s of 10.395409584s, submitted: 62
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f7300000/0x0/0x4ffc00000, data 0x275f3af/0x284c000, compress 0x0/0x0/0x0, omap 0x34469, meta 0x607bb97), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:33.286297+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127303680 unmapped: 1679360 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451779 data_alloc: 218103808 data_used: 13249
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:34.286452+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 1597440 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:35.286622+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127393792 unmapped: 1589248 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f72d8000/0x0/0x4ffc00000, data 0x27843c2/0x2872000, compress 0x0/0x0/0x0, omap 0x34ad7, meta 0x607b529), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:36.286756+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127598592 unmapped: 1384448 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f7287000/0x0/0x4ffc00000, data 0x27d6e41/0x28c5000, compress 0x0/0x0/0x0, omap 0x34d1f, meta 0x607b2e1), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:37.286895+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127598592 unmapped: 1384448 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:38.287046+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127598592 unmapped: 1384448 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453553 data_alloc: 218103808 data_used: 13249
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:39.287188+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 2056192 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:40.287313+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 2056192 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:41.287461+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 2056192 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f7252000/0x0/0x4ffc00000, data 0x280cd0c/0x28fa000, compress 0x0/0x0/0x0, omap 0x354d2, meta 0x607ab2e), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:42.287603+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 3014656 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.356106758s of 10.192037582s, submitted: 61
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:43.287730+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 1949696 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463525 data_alloc: 218103808 data_used: 13249
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:44.287855+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 1892352 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:45.287988+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 1654784 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:46.288131+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 2940928 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f71e7000/0x0/0x4ffc00000, data 0x28750e2/0x2965000, compress 0x0/0x0/0x0, omap 0x35962, meta 0x607a69e), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:47.288273+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128212992 unmapped: 2867200 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:48.288391+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 2859008 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463183 data_alloc: 218103808 data_used: 13249
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:49.288517+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 2850816 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:50.288647+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 3620864 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:51.288784+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 2310144 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:52.288916+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f715e000/0x0/0x4ffc00000, data 0x28fbfc1/0x29ec000, compress 0x0/0x0/0x0, omap 0x3638f, meta 0x6079c71), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 2310144 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.403574944s of 10.003150940s, submitted: 83
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:53.289030+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129933312 unmapped: 2195456 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471809 data_alloc: 218103808 data_used: 13249
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:54.289149+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130277376 unmapped: 1851392 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:55.289293+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 1843200 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:56.289439+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130293760 unmapped: 1835008 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f70fa000/0x0/0x4ffc00000, data 0x2962b7a/0x2a51000, compress 0x0/0x0/0x0, omap 0x36a67, meta 0x6079599), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:57.289588+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130228224 unmapped: 1900544 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f70fa000/0x0/0x4ffc00000, data 0x2962b7a/0x2a51000, compress 0x0/0x0/0x0, omap 0x36b8b, meta 0x6079475), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:58.289747+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 2768896 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480545 data_alloc: 218103808 data_used: 13249
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:59.289885+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 2719744 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:00.290027+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129736704 unmapped: 3440640 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f70b5000/0x0/0x4ffc00000, data 0x29a479d/0x2a95000, compress 0x0/0x0/0x0, omap 0x376fb, meta 0x6078905), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:01.290183+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 3293184 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:02.290318+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 131153920 unmapped: 2023424 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.886514187s of 10.002050400s, submitted: 99
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 15
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:03.290467+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 1376256 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1487345 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:04.290610+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 2424832 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f7053000/0x0/0x4ffc00000, data 0x2a09072/0x2af9000, compress 0x0/0x0/0x0, omap 0x375d4, meta 0x6078a2c), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:05.290757+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 2301952 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f7019000/0x0/0x4ffc00000, data 0x2a41d3a/0x2b33000, compress 0x0/0x0/0x0, omap 0x376ac, meta 0x6078954), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:06.290883+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 2170880 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f6fee000/0x0/0x4ffc00000, data 0x2a6b7f7/0x2b5e000, compress 0x0/0x0/0x0, omap 0x3785c, meta 0x60787a4), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:07.291136+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 2162688 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:08.291287+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 2154496 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 154 handle_osd_map epochs [156,156], i have 154, src has [1,156]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 154 handle_osd_map epochs [155,156], i have 154, src has [1,156]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502319 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:09.291472+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 4030464 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f6fb0000/0x0/0x4ffc00000, data 0x2aa8336/0x2b9a000, compress 0x0/0x0/0x0, omap 0x38185, meta 0x6077e7b), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:10.291663+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 4030464 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:11.291840+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f6f72000/0x0/0x4ffc00000, data 0x2ae593d/0x2bd8000, compress 0x0/0x0/0x0, omap 0x38455, meta 0x6077bab), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 3784704 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:12.292175+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 3481600 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.291734695s of 10.002635002s, submitted: 159
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:13.292365+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 3481600 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f6f74000/0x0/0x4ffc00000, data 0x2ae5a07/0x2bd8000, compress 0x0/0x0/0x0, omap 0x38575, meta 0x6077a8b), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1506503 data_alloc: 218103808 data_used: 13098
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:14.292658+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 3481600 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f6f6f000/0x0/0x4ffc00000, data 0x2ae74e6/0x2bdb000, compress 0x0/0x0/0x0, omap 0x3896f, meta 0x6077691), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:15.292906+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 3481600 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:16.293153+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134053888 unmapped: 2269184 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:17.293316+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134053888 unmapped: 2269184 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:18.293508+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134086656 unmapped: 2236416 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1513333 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:19.293713+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134086656 unmapped: 2236416 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:20.293897+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134086656 unmapped: 2236416 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f6f39000/0x0/0x4ffc00000, data 0x2b1c891/0x2c11000, compress 0x0/0x0/0x0, omap 0x38db7, meta 0x6077249), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:21.294126+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134127616 unmapped: 3244032 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:22.294270+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134127616 unmapped: 3244032 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.154506683s of 10.454253197s, submitted: 65
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:23.294422+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 158 handle_osd_map epochs [158,159], i have 159, src has [1,159]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 3481600 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519225 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6ef3000/0x0/0x4ffc00000, data 0x2b62258/0x2c58000, compress 0x0/0x0/0x0, omap 0x39087, meta 0x6076f79), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:24.294788+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 133824512 unmapped: 3547136 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:25.294948+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 133824512 unmapped: 3547136 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:26.295128+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6ec8000/0x0/0x4ffc00000, data 0x2b8ada5/0x2c82000, compress 0x0/0x0/0x0, omap 0x3956c, meta 0x6076a94), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 133832704 unmapped: 3538944 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:27.295275+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 133832704 unmapped: 3538944 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:28.295529+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134881280 unmapped: 2490368 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1531411 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:29.295845+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 2523136 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:30.296011+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 2441216 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 160 ms_handle_reset con 0x55944a48a000 session 0x55944c4d4540
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6e7c000/0x0/0x4ffc00000, data 0x2bd6838/0x2cd0000, compress 0x0/0x0/0x0, omap 0x39c3f, meta 0x60763c1), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:31.296229+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135446528 unmapped: 2973696 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6e3b000/0x0/0x4ffc00000, data 0x2c19312/0x2d11000, compress 0x0/0x0/0x0, omap 0x39ccf, meta 0x6076331), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:32.296396+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 2965504 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 16
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.442367554s of 10.008372307s, submitted: 362
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:33.296636+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 2695168 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537411 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:34.296818+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 2695168 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:35.296962+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 2695168 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:36.297147+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:37.297354+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f6dee000/0x0/0x4ffc00000, data 0x2c643f0/0x2d5e000, compress 0x0/0x0/0x0, omap 0x3a901, meta 0x60756ff), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:38.297540+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1536101 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:39.297748+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:40.298031+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:41.298300+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136806400 unmapped: 1613824 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:42.298625+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:43.298794+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541275 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:44.298959+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:45.299179+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:46.299599+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:47.299770+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:48.299912+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541275 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:49.300093+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:50.300283+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:51.300495+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:52.300699+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:53.300841+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541275 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:54.301035+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:55.301233+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:56.301430+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:57.301657+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:58.301803+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541275 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:59.301958+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:00.302541+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:01.302890+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:02.303060+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:03.303222+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.615039825s of 30.784770966s, submitted: 45
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541411 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:04.303411+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 2891776 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:05.303652+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 2891776 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:06.303794+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 3940352 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6daa000/0x0/0x4ffc00000, data 0x2ca69f5/0x2da2000, compress 0x0/0x0/0x0, omap 0x3ae6e, meta 0x6075192), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:07.303921+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 3874816 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:08.304074+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 3874816 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:09.304229+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541711 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135626752 unmapped: 3842048 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:10.304389+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135626752 unmapped: 3842048 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:11.304634+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d99000/0x0/0x4ffc00000, data 0x2cb8492/0x2db3000, compress 0x0/0x0/0x0, omap 0x3aefe, meta 0x6075102), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135626752 unmapped: 3842048 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:12.304869+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d7b000/0x0/0x4ffc00000, data 0x2cd5957/0x2dd1000, compress 0x0/0x0/0x0, omap 0x3aefe, meta 0x6075102), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135626752 unmapped: 3842048 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:13.305017+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135659520 unmapped: 3809280 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.969015121s of 10.246096611s, submitted: 26
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:14.305254+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1548623 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135692288 unmapped: 3776512 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d49000/0x0/0x4ffc00000, data 0x2d06bc6/0x2e03000, compress 0x0/0x0/0x0, omap 0x3b13e, meta 0x6074ec2), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:15.305465+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 3678208 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d49000/0x0/0x4ffc00000, data 0x2d06c2b/0x2e03000, compress 0x0/0x0/0x0, omap 0x3b1ce, meta 0x6074e32), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:16.305675+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135938048 unmapped: 3530752 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:17.305790+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135938048 unmapped: 3530752 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:18.305923+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 3522560 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:19.306155+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545941 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 3522560 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d1c000/0x0/0x4ffc00000, data 0x2d34e2b/0x2e30000, compress 0x0/0x0/0x0, omap 0x3b576, meta 0x6074a8a), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:20.306320+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d1c000/0x0/0x4ffc00000, data 0x2d34e2b/0x2e30000, compress 0x0/0x0/0x0, omap 0x3b576, meta 0x6074a8a), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 3653632 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:21.306491+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 3514368 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:22.306653+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 3514368 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:23.306834+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 3514368 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:24.307074+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551863 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 3514368 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:25.307220+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 3514368 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x2d59e53/0x2e56000, compress 0x0/0x0/0x0, omap 0x3b937, meta 0x60746c9), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.918767929s of 11.992917061s, submitted: 58
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:26.307412+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:27.307686+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:28.307902+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:29.308108+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1552883 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6ce5000/0x0/0x4ffc00000, data 0x2d694af/0x2e65000, compress 0x0/0x0/0x0, omap 0x3b937, meta 0x60746c9), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,3])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:30.308334+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:31.308606+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6ce5000/0x0/0x4ffc00000, data 0x2d694af/0x2e65000, compress 0x0/0x0/0x0, omap 0x3b937, meta 0x60746c9), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:32.308781+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:33.309046+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.572957993s, txc = 0x55944b372c00, txc bytes = 2273, txc ios = 1, txc cost = 672273, txc onodes = 1, DB updates = 7, DB bytes = 6277, cost max = 95489052 on 2026-01-23T18:35:28.353611+0000, txc max = 104 on 2026-01-23T19:04:00.405895+0000
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 7.572911263s
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 7.572911263s
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.627645016s, txc = 0x55944b2c1b00, txc bytes = 25069, txc ios = 1, txc cost = 695069, txc onodes = 1, DB updates = 7, DB bytes = 30114, cost max = 95489052 on 2026-01-23T18:35:28.353611+0000, txc max = 104 on 2026-01-23T19:04:00.405895+0000
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.627498150s, txc = 0x55944c30c300, txc bytes = 40207, txc ios = 1, txc cost = 710207, txc onodes = 1, DB updates = 7, DB bytes = 42454, cost max = 95489052 on 2026-01-23T18:35:28.353611+0000, txc max = 104 on 2026-01-23T19:04:00.405895+0000
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 3235840 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:34.309251+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551287 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 3235840 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:35.309445+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 3235840 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:36.310289+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 3235840 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:37.310495+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6cdb000/0x0/0x4ffc00000, data 0x2d74c99/0x2e71000, compress 0x0/0x0/0x0, omap 0x3b937, meta 0x60746c9), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 3235840 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:38.310705+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.898449898s of 12.557981491s, submitted: 4
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:39.310876+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:40.311020+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:41.311249+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:42.311396+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:43.311633+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:44.311788+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:45.311915+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:46.312187+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:47.312378+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:48.312533+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:49.312689+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:50.312868+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:51.313095+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:52.313225+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:53.313365+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:54.313486+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:55.313719+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:56.313864+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:57.313995+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:58.314148+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:59.314272+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:00.314412+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:01.314603+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:02.314748+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:03.314944+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:04.315077+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:05.315309+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:06.315537+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:07.315779+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:08.315986+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:09.316143+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:10.316331+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:11.316531+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:12.316669+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:13.316802+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:14.316978+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:15.317111+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:16.317247+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:17.317376+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:18.317631+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 3203072 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:19.317752+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 3203072 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:20.317892+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:21.318071+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:22.318218+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:23.318350+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:24.318471+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:25.318642+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:26.318749+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:27.318926+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:28.319195+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:29.319389+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 51.413509369s of 51.424388885s, submitted: 13
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:30.319583+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 ms_handle_reset con 0x55944d947c00 session 0x55944d5c0380
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136839168 unmapped: 2629632 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:31.319731+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136839168 unmapped: 2629632 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:32.319862+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 17
Jan 23 19:11:14 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:33.320025+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:34.320142+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:35.320265+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:36.320379+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:37.320513+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:38.320620+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:39.320811+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:14 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:14 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:40.320941+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136937472 unmapped: 2531328 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:41.321088+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: do_command 'config diff' '{prefix=config diff}'
Jan 23 19:11:14 compute-0 ceph-osd[87009]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 19:11:14 compute-0 ceph-osd[87009]: do_command 'config show' '{prefix=config show}'
Jan 23 19:11:14 compute-0 ceph-osd[87009]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 19:11:14 compute-0 ceph-osd[87009]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 19:11:14 compute-0 ceph-osd[87009]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 19:11:14 compute-0 ceph-osd[87009]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 19:11:14 compute-0 ceph-osd[87009]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137248768 unmapped: 3268608 heap: 140517376 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:42.321209+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 3186688 heap: 140517376 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:14 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:11:14 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:43.321358+0000)
Jan 23 19:11:14 compute-0 ceph-osd[87009]: do_command 'log dump' '{prefix=log dump}'
Jan 23 19:11:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:14 compute-0 nova_compute[240119]: 2026-01-23 19:11:14.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:11:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 23 19:11:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/263873625' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 23 19:11:14 compute-0 rsyslogd[1007]: imjournal from <np0005593912:ceph-osd>: begin to drop messages due to rate-limiting
Jan 23 19:11:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 23 19:11:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/394676754' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 23 19:11:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 23 19:11:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3998185724' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 23 19:11:15 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/263873625' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 23 19:11:15 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/394676754' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 23 19:11:15 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3998185724' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 23 19:11:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 23 19:11:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4264352747' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 23 19:11:15 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 19:11:15 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 19:11:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 23 19:11:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1516735490' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 23 19:11:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 23 19:11:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3133018666' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 23 19:11:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 23 19:11:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696546018' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 23 19:11:16 compute-0 ceph-mon[75097]: pgmap v1258: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 23 19:11:16 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4264352747' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 23 19:11:16 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1516735490' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 23 19:11:16 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3133018666' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 23 19:11:16 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/696546018' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 23 19:11:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 23 19:11:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529369344' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 23 19:11:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 23 19:11:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1071853593' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 23 19:11:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 23 19:11:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262564913' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 23 19:11:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 23 19:11:16 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121711021' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 23 19:11:17 compute-0 podman[256381]: 2026-01-23 19:11:17.024535083 +0000 UTC m=+0.094902489 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 19:11:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 19:11:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2319509396' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 23 19:11:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2529369344' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 23 19:11:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1071853593' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 23 19:11:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1262564913' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 23 19:11:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1121711021' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 23 19:11:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2319509396' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 23 19:11:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 23 19:11:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1207977640' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 23 19:11:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 23 19:11:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914636503' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 23 19:11:17 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14624 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:18 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14626 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:18 compute-0 ceph-mon[75097]: pgmap v1259: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:18 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1207977640' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 23 19:11:18 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2914636503' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 23 19:11:18 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871452 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:07.174758+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:36.232266+0000 osd.0 (osd.0) 112 : cluster [DBG] 2.13 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:36.243084+0000 osd.0 (osd.0) 113 : cluster [DBG] 2.13 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.280504227s of 10.305309296s, submitted: 12
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 917504 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 113)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:36.232266+0000 osd.0 (osd.0) 112 : cluster [DBG] 2.13 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:36.243084+0000 osd.0 (osd.0) 113 : cluster [DBG] 2.13 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:08.175458+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:37.200936+0000 osd.0 (osd.0) 114 : cluster [DBG] 3.12 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:37.211471+0000 osd.0 (osd.0) 115 : cluster [DBG] 3.12 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 115)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:37.200936+0000 osd.0 (osd.0) 114 : cluster [DBG] 3.12 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:37.211471+0000 osd.0 (osd.0) 115 : cluster [DBG] 3.12 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:09.175843+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:10.175974+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 892928 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:11.176186+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 892928 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876278 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:12.176379+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:41.213662+0000 osd.0 (osd.0) 116 : cluster [DBG] 5.15 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:41.224251+0000 osd.0 (osd.0) 117 : cluster [DBG] 5.15 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 892928 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 117)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:41.213662+0000 osd.0 (osd.0) 116 : cluster [DBG] 5.15 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:41.224251+0000 osd.0 (osd.0) 117 : cluster [DBG] 5.15 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:13.176686+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 884736 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:14.176874+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 884736 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:15.177157+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:44.207145+0000 osd.0 (osd.0) 118 : cluster [DBG] 8.1a scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:44.217753+0000 osd.0 (osd.0) 119 : cluster [DBG] 8.1a scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 843776 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 119)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:44.207145+0000 osd.0 (osd.0) 118 : cluster [DBG] 8.1a scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:44.217753+0000 osd.0 (osd.0) 119 : cluster [DBG] 8.1a scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:16.177464+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:45.197903+0000 osd.0 (osd.0) 120 : cluster [DBG] 5.14 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:45.208497+0000 osd.0 (osd.0) 121 : cluster [DBG] 5.14 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 819200 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 883517 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 121)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:45.197903+0000 osd.0 (osd.0) 120 : cluster [DBG] 5.14 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:45.208497+0000 osd.0 (osd.0) 121 : cluster [DBG] 5.14 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:17.177764+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:46.223830+0000 osd.0 (osd.0) 122 : cluster [DBG] 2.11 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:46.234496+0000 osd.0 (osd.0) 123 : cluster [DBG] 2.11 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.944892883s of 10.002225876s, submitted: 10
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 802816 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 123)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:46.223830+0000 osd.0 (osd.0) 122 : cluster [DBG] 2.11 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:46.234496+0000 osd.0 (osd.0) 123 : cluster [DBG] 2.11 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:18.177991+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:47.203137+0000 osd.0 (osd.0) 124 : cluster [DBG] 10.7 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:47.213668+0000 osd.0 (osd.0) 125 : cluster [DBG] 10.7 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 794624 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 125)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:47.203137+0000 osd.0 (osd.0) 124 : cluster [DBG] 10.7 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:47.213668+0000 osd.0 (osd.0) 125 : cluster [DBG] 10.7 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:19.178331+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 778240 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:20.178533+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:21.178719+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885930 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:22.178840+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:23.179078+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:52.354930+0000 osd.0 (osd.0) 126 : cluster [DBG] 7.1b scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:52.365479+0000 osd.0 (osd.0) 127 : cluster [DBG] 7.1b scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 127)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:52.354930+0000 osd.0 (osd.0) 126 : cluster [DBG] 7.1b scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:52.365479+0000 osd.0 (osd.0) 127 : cluster [DBG] 7.1b scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:24.179354+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:25.179591+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:54.367291+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.14 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:54.377857+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.14 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 129)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:54.367291+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.14 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:54.377857+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.14 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:26.179915+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890756 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:27.180053+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.169645309s of 10.190286636s, submitted: 6
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:28.180208+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:57.393384+0000 osd.0 (osd.0) 130 : cluster [DBG] 10.16 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:57.403857+0000 osd.0 (osd.0) 131 : cluster [DBG] 10.16 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 131)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:57.393384+0000 osd.0 (osd.0) 130 : cluster [DBG] 10.16 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:57.403857+0000 osd.0 (osd.0) 131 : cluster [DBG] 10.16 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:29.180449+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:30.180885+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:59.431023+0000 osd.0 (osd.0) 132 : cluster [DBG] 8.18 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:39:59.441621+0000 osd.0 (osd.0) 133 : cluster [DBG] 8.18 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:31.181189+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 133)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:59.431023+0000 osd.0 (osd.0) 132 : cluster [DBG] 8.18 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:39:59.441621+0000 osd.0 (osd.0) 133 : cluster [DBG] 8.18 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:32.181315+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:01.477810+0000 osd.0 (osd.0) 134 : cluster [DBG] 10.9 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:01.491961+0000 osd.0 (osd.0) 135 : cluster [DBG] 10.9 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897997 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 135)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:01.477810+0000 osd.0 (osd.0) 134 : cluster [DBG] 10.9 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:01.491961+0000 osd.0 (osd.0) 135 : cluster [DBG] 10.9 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:33.181642+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:34.181984+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:35.182273+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:36.182459+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:05.387192+0000 osd.0 (osd.0) 136 : cluster [DBG] 10.15 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:05.401268+0000 osd.0 (osd.0) 137 : cluster [DBG] 10.15 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 137)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:05.387192+0000 osd.0 (osd.0) 136 : cluster [DBG] 10.15 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:05.401268+0000 osd.0 (osd.0) 137 : cluster [DBG] 10.15 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:37.182856+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 900412 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.485198021s of 10.010319710s, submitted: 9
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:38.183023+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:07.389758+0000 osd.0 (osd.0) 138 : cluster [DBG] 8.6 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:07.403871+0000 osd.0 (osd.0) 139 : cluster [DBG] 8.6 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 139)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:07.389758+0000 osd.0 (osd.0) 138 : cluster [DBG] 8.6 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:07.403871+0000 osd.0 (osd.0) 139 : cluster [DBG] 8.6 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:39.183309+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:40.183456+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:41.183668+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:42.183812+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:11.486540+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.f scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:11.504347+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.f scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905234 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 141)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:11.486540+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.f scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:11.504347+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.f scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:43.184122+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:44.184335+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:13.473079+0000 osd.0 (osd.0) 142 : cluster [DBG] 10.d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:13.487166+0000 osd.0 (osd.0) 143 : cluster [DBG] 10.d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:45.184590+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 143)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:13.473079+0000 osd.0 (osd.0) 142 : cluster [DBG] 10.d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:13.487166+0000 osd.0 (osd.0) 143 : cluster [DBG] 10.d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:46.184790+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:15.468252+0000 osd.0 (osd.0) 144 : cluster [DBG] 10.e scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:15.482379+0000 osd.0 (osd.0) 145 : cluster [DBG] 10.e scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 145)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:15.468252+0000 osd.0 (osd.0) 144 : cluster [DBG] 10.e scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:15.482379+0000 osd.0 (osd.0) 145 : cluster [DBG] 10.e scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:47.185084+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910060 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:48.185258+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.756112099s of 11.056242943s, submitted: 7
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:49.185387+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:18.460070+0000 osd.0 (osd.0) 146 : cluster [DBG] 8.1d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:18.474266+0000 osd.0 (osd.0) 147 : cluster [DBG] 8.1d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 147)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:18.460070+0000 osd.0 (osd.0) 146 : cluster [DBG] 8.1d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:18.474266+0000 osd.0 (osd.0) 147 : cluster [DBG] 8.1d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:50.185642+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:19.459901+0000 osd.0 (osd.0) 148 : cluster [DBG] 6.a scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:19.470482+0000 osd.0 (osd.0) 149 : cluster [DBG] 6.a scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 149)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:19.459901+0000 osd.0 (osd.0) 148 : cluster [DBG] 6.a scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:19.470482+0000 osd.0 (osd.0) 149 : cluster [DBG] 6.a scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:51.185842+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:52.186020+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:21.471409+0000 osd.0 (osd.0) 150 : cluster [DBG] 6.5 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:21.489068+0000 osd.0 (osd.0) 151 : cluster [DBG] 6.5 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917295 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 565248 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:53.186369+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 4 last_log 153 sent 151 num 4 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:22.453177+0000 osd.0 (osd.0) 152 : cluster [DBG] 6.9 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:22.463626+0000 osd.0 (osd.0) 153 : cluster [DBG] 6.9 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:54.186644+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 151)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:21.471409+0000 osd.0 (osd.0) 150 : cluster [DBG] 6.5 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:21.489068+0000 osd.0 (osd.0) 151 : cluster [DBG] 6.5 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 153)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:22.453177+0000 osd.0 (osd.0) 152 : cluster [DBG] 6.9 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:22.463626+0000 osd.0 (osd.0) 153 : cluster [DBG] 6.9 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 532480 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:55.186880+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:56.187048+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:25.469875+0000 osd.0 (osd.0) 154 : cluster [DBG] 6.7 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:25.484011+0000 osd.0 (osd.0) 155 : cluster [DBG] 6.7 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 155)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:25.469875+0000 osd.0 (osd.0) 154 : cluster [DBG] 6.7 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:25.484011+0000 osd.0 (osd.0) 155 : cluster [DBG] 6.7 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 507904 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:57.187390+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:26.478885+0000 osd.0 (osd.0) 156 : cluster [DBG] 6.3 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:26.496574+0000 osd.0 (osd.0) 157 : cluster [DBG] 6.3 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924528 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 157)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:26.478885+0000 osd.0 (osd.0) 156 : cluster [DBG] 6.3 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:26.496574+0000 osd.0 (osd.0) 157 : cluster [DBG] 6.3 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 507904 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:58.187662+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.977183342s of 10.004480362s, submitted: 13
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 499712 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:39:59.187804+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:28.439834+0000 osd.0 (osd.0) 158 : cluster [DBG] 6.0 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:28.464588+0000 osd.0 (osd.0) 159 : cluster [DBG] 6.0 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 159)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:28.439834+0000 osd.0 (osd.0) 158 : cluster [DBG] 6.0 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:28.464588+0000 osd.0 (osd.0) 159 : cluster [DBG] 6.0 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:00.188011+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:01.188138+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:30.435818+0000 osd.0 (osd.0) 160 : cluster [DBG] 11.f scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:30.446381+0000 osd.0 (osd.0) 161 : cluster [DBG] 11.f scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 161)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:30.435818+0000 osd.0 (osd.0) 160 : cluster [DBG] 11.f scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:30.446381+0000 osd.0 (osd.0) 161 : cluster [DBG] 11.f scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 475136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:02.188356+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929352 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 475136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:03.188478+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:32.444191+0000 osd.0 (osd.0) 162 : cluster [DBG] 11.1 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:32.454712+0000 osd.0 (osd.0) 163 : cluster [DBG] 11.1 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 163)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:32.444191+0000 osd.0 (osd.0) 162 : cluster [DBG] 11.1 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:32.454712+0000 osd.0 (osd.0) 163 : cluster [DBG] 11.1 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:04.188710+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:05.188941+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:06.189115+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:07.189200+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931765 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:08.189366+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:37.425128+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.e scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:37.435032+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.e scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 165)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:37.425128+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.e scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:37.435032+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.e scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:09.189636+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:10.189745+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:11.189894+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:12.190029+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934178 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:13.190277+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 417792 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:14.190446+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 409600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:15.190704+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 409600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:16.190868+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:17.191040+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934178 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 393216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:18.191210+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.870124817s of 19.884685516s, submitted: 7
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:19.191347+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:48.349233+0000 osd.0 (osd.0) 166 : cluster [DBG] 11.19 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:48.359754+0000 osd.0 (osd.0) 167 : cluster [DBG] 11.19 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 167)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:48.349233+0000 osd.0 (osd.0) 166 : cluster [DBG] 11.19 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:48.359754+0000 osd.0 (osd.0) 167 : cluster [DBG] 11.19 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:20.191635+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:21.191763+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:50.422405+0000 osd.0 (osd.0) 168 : cluster [DBG] 11.4 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:50.433006+0000 osd.0 (osd.0) 169 : cluster [DBG] 11.4 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 169)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:50.422405+0000 osd.0 (osd.0) 168 : cluster [DBG] 11.4 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:50.433006+0000 osd.0 (osd.0) 169 : cluster [DBG] 11.4 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:22.191931+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:51.460628+0000 osd.0 (osd.0) 170 : cluster [DBG] 11.14 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:51.471196+0000 osd.0 (osd.0) 171 : cluster [DBG] 11.14 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941421 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 319488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 171)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:51.460628+0000 osd.0 (osd.0) 170 : cluster [DBG] 11.14 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:51.471196+0000 osd.0 (osd.0) 171 : cluster [DBG] 11.14 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:23.192110+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 319488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:24.192300+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:25.192513+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:26.192717+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 303104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:27.192838+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:56.373975+0000 osd.0 (osd.0) 172 : cluster [DBG] 11.10 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:40:56.384479+0000 osd.0 (osd.0) 173 : cluster [DBG] 11.10 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943836 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 303104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 173)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:56.373975+0000 osd.0 (osd.0) 172 : cluster [DBG] 11.10 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:40:56.384479+0000 osd.0 (osd.0) 173 : cluster [DBG] 11.10 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:28.193052+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:29.193242+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:30.193412+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:31.193596+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:32.193779+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943836 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:33.193946+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.135893822s of 15.189969063s, submitted: 8
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:34.194080+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:03.539257+0000 osd.0 (osd.0) 174 : cluster [DBG] 11.17 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:03.549808+0000 osd.0 (osd.0) 175 : cluster [DBG] 11.17 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 175)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:03.539257+0000 osd.0 (osd.0) 174 : cluster [DBG] 11.17 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:03.549808+0000 osd.0 (osd.0) 175 : cluster [DBG] 11.17 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:35.194284+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:36.194410+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 253952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:37.194676+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946251 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 253952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:38.194827+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:39.194945+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:08.518972+0000 osd.0 (osd.0) 176 : cluster [DBG] 11.6 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:08.528537+0000 osd.0 (osd.0) 177 : cluster [DBG] 11.6 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 177)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:08.518972+0000 osd.0 (osd.0) 176 : cluster [DBG] 11.6 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:08.528537+0000 osd.0 (osd.0) 177 : cluster [DBG] 11.6 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:40.195134+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:41.195268+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:42.195404+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948664 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:43.195587+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:12.527501+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.11 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:12.562853+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.11 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 179)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:12.527501+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.11 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:12.562853+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.11 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:44.195872+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:13.480100+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.5 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:13.522648+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.5 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.958405495s of 10.989315033s, submitted: 8
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 181)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:13.480100+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.5 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:13.522648+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.5 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 196608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:45.196132+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:14.528592+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.16 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:14.552371+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.16 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 183)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:14.528592+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.16 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:14.552371+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.16 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 196608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:46.196331+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:47.196540+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955901 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:48.196793+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:49.196988+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 163840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:50.197179+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:51.197354+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:20.395899+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.b scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:20.420615+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.b scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 139264 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:52.197588+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 4 last_log 187 sent 185 num 4 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:21.408057+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.9 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:21.435827+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.9 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 185)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:20.395899+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.b scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:20.420615+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.b scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 187)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:21.408057+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.9 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:21.435827+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.9 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960723 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 139264 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:53.197816+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 131072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:54.198073+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:55.198447+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.442311287s of 10.793945312s, submitted: 6
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 106496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:56.198655+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:25.322417+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:25.360337+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 189)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:25.322417+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:25.360337+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 106496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:57.198818+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963134 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 106496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:58.199022+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:40:59.199203+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 98304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:00.199363+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:29.293072+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.1 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:29.335399+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.1 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 191)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:29.293072+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.1 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:29.335399+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.1 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:01.199647+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 65536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:02.199857+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 65536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965545 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:03.200074+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:04.200285+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:05.200631+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:34.378538+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.3 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:34.417372+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.3 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 32768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.014067650s of 10.084829330s, submitted: 6
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 193)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:34.378538+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.3 scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:34.417372+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.3 scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:06.201103+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:35.407419+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.1d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:35.435629+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.1d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 32768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 195)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:35.407419+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.1d scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:35.435629+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.1d scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:07.201464+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:36.375888+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.1c scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:36.418297+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.1c scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975195 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 197)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:36.375888+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.1c scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:36.418297+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.1c scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:08.201806+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:37.381034+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1e scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:37.412814+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1e scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 199)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:37.381034+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1e scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:37.412814+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1e scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:09.202032+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:10.202215+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:11.202367+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 0 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:12.202633+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 0 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975195 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:13.202781+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 0 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:14.203025+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1040384 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:15.203209+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:44.387065+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1b scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  will send 2026-01-23T18:41:44.408348+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1b scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client handle_log_ack log(last 201)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:44.387065+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1b scrub starts
Jan 23 19:11:18 compute-0 ceph-osd[85953]: log_client  logged 2026-01-23T18:41:44.408348+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1b scrub ok
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:16.203403+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:17.203631+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:18.203860+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:19.204043+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:20.204190+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:21.204400+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1007616 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:22.204844+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1007616 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:23.204998+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:24.205159+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:25.205359+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1007616 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:26.205652+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:27.205791+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:28.205988+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:29.206130+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:30.206310+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:31.206465+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:32.206622+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:33.206787+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:34.207023+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:35.207328+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:36.207621+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:37.207799+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:38.207982+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:39.208163+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:40.208323+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:41.208513+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:42.208628+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:43.235376+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:44.235513+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:45.235667+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 917504 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:46.235796+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 917504 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:47.235921+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:48.236038+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:49.236183+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:50.236325+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:51.236507+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:52.236640+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:53.236761+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:54.236957+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:55.237113+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:56.237266+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:57.237397+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:58.237549+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:41:59.237740+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:00.237926+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:01.238103+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:02.238263+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:03.238460+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:04.238672+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:05.238869+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:06.239014+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:07.239139+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:08.240033+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:09.240205+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:10.240345+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:11.240510+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:12.240684+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:13.240786+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:14.240934+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:15.241102+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 786432 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:16.241229+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:17.241401+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:18.241532+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 770048 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:19.241665+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 770048 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:20.241816+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:21.241989+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:22.242195+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:23.242387+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:24.242630+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:25.242830+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:26.243029+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:27.243183+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:28.243340+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:29.243515+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:30.243696+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:31.243823+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:32.243928+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:33.244044+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 704512 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:34.244215+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 704512 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:35.244419+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:36.244632+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:37.244759+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:38.244966+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:39.245100+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 704512 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:40.245254+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:41.245377+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 679936 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:42.245510+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 679936 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:43.245621+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:44.245754+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:45.246083+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:46.246197+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:47.246351+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:48.246502+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:49.246649+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:50.246773+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:51.246939+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:52.247099+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:53.247273+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:54.247425+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:55.247681+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:56.247840+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 630784 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:57.247966+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 630784 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:58.248352+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 630784 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:42:59.248535+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 606208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:00.248715+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 606208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:01.248859+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 581632 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:02.248998+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 581632 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:03.249155+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 581632 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:04.249339+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 573440 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:05.249762+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 573440 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:06.249879+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 573440 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:07.250002+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 565248 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:08.250162+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 565248 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:09.250296+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 540672 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:10.250425+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 540672 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:11.250532+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 540672 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:12.250632+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 540672 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:13.250788+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 532480 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:14.250961+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 532480 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:15.251141+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 532480 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:16.251268+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 524288 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:17.251381+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 524288 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:18.251518+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 516096 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:19.251615+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 499712 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:20.251787+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 499712 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:21.251918+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 491520 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:22.252101+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 491520 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:23.252256+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:24.252399+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:25.252547+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 475136 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:26.252704+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 475136 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:27.252870+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 475136 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:28.253036+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:29.253195+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:30.253352+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 458752 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:31.253504+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 458752 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:32.253631+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 458752 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:33.253743+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 450560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:34.253942+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 442368 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:35.254149+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:36.254336+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:37.254598+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:38.254785+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 425984 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:39.255000+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 425984 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:40.255270+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 425984 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:41.255511+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 417792 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:42.255778+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 417792 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:43.255939+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:44.256075+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:45.256266+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:46.256390+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:47.256655+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:48.256808+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:49.256935+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:50.257104+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:51.257235+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:52.257497+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:53.257619+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 368640 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:54.257744+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 360448 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:55.257989+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 344064 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:56.258159+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 344064 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:57.258304+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:58.258446+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:43:59.258598+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:00.258732+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 319488 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:01.258873+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 319488 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:02.259006+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 319488 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:03.259182+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:04.259440+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:05.259695+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:06.259916+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:07.260118+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:08.260317+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:09.260648+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:10.260848+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 278528 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:11.261488+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 278528 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:12.261702+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 270336 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:13.261848+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 270336 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:14.261931+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 270336 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:15.262070+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 262144 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:16.262192+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 262144 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:17.262325+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 262144 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:18.262448+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 253952 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:19.262608+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 245760 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:20.262809+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 237568 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:21.262983+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 237568 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:22.263105+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 237568 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:23.263340+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 229376 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:24.263553+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 229376 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:25.263794+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 229376 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:26.263944+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 221184 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:27.264076+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 221184 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:28.264244+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 212992 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:29.264453+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 212992 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:30.264605+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 196608 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:31.264753+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 196608 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:32.264895+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 196608 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:33.265018+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 188416 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:34.265254+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 180224 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:35.265476+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 180224 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:36.265689+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 172032 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:37.265854+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 172032 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:38.266006+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 163840 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:39.266140+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 155648 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:40.266298+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 155648 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:41.266446+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 147456 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:42.266646+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 147456 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:43.266792+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 139264 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:44.266920+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 131072 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:45.267155+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 122880 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:46.267329+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 122880 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:47.267536+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 122880 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:48.267696+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 114688 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:49.267847+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 114688 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:50.267970+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5543 writes, 24K keys, 5543 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5543 writes, 847 syncs, 6.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5543 writes, 24K keys, 5543 commit groups, 1.0 writes per commit group, ingest: 18.63 MB, 0.03 MB/s
                                           Interval WAL: 5543 writes, 847 syncs, 6.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 57344 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:51.268116+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 49152 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:52.268254+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 49152 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:53.268407+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 49152 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:54.268608+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 40960 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:55.268772+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 40960 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:56.268939+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 32768 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:57.269094+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 32768 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:58.269226+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 32768 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:44:59.269442+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 16384 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:00.269593+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 16384 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:01.269705+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 16384 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:02.269853+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 8192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:03.270022+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 8192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:04.270222+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1032192 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:05.270399+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1032192 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:06.270530+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1024000 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:07.270632+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1024000 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:08.270805+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:09.270967+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:10.271104+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:11.271268+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 999424 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:12.271419+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 999424 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:13.271583+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 999424 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:14.271741+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 983040 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:15.271909+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 983040 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:16.272040+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 983040 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:17.272173+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:18.272298+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:19.272436+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 966656 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:20.272586+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 966656 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:21.272729+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 958464 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:22.272868+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 958464 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:23.272996+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 958464 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:24.273159+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 942080 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:25.273348+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 942080 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:26.273487+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 942080 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:27.273659+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 933888 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:28.273781+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 933888 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:29.273901+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 925696 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:30.274016+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 925696 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:31.274124+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 917504 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:32.274244+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 917504 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:33.274377+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 917504 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:34.274493+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 909312 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:35.274588+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:36.274725+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 909312 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:37.274851+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 909312 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:38.275036+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 892928 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:39.275158+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 892928 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:40.275322+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 884736 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:41.275504+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 884736 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:42.275652+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 884736 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:43.275806+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 876544 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:44.275934+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 876544 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:45.276096+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 868352 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:46.276230+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 860160 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:47.276363+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 860160 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:48.276554+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 860160 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:49.276754+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 851968 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:50.276899+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 835584 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:51.277114+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 827392 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:52.277309+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 827392 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:53.277625+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 827392 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:54.277880+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 819200 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:55.278145+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 819200 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:56.278330+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 811008 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:57.278466+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 811008 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:58.278613+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 811008 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:45:59.278734+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 802816 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:00.278917+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 802816 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:01.279058+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 802816 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:02.279195+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 794624 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 297.652160645s of 297.673522949s, submitted: 8
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:03.279315+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 1589248 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:04.279543+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1318912 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:05.279720+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1318912 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:06.279840+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1310720 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:07.279965+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1310720 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:08.280088+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1302528 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:09.280217+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1294336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:10.280368+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1269760 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:11.280495+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1269760 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:12.280612+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1269760 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:13.280741+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1269760 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:14.280892+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1269760 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:15.281077+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1245184 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:16.281216+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1245184 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:17.281347+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1236992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:18.281481+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1228800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:19.281665+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1228800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:20.281819+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1212416 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:21.281948+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1204224 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:22.282068+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1204224 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:23.282208+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 1196032 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:24.282321+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 1196032 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:25.282462+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1179648 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:26.282609+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1179648 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:27.282766+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1171456 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:28.282911+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1171456 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:29.283030+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1171456 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:30.283176+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1163264 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:31.283414+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1163264 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:32.283606+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1163264 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:33.283745+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 1155072 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:34.283883+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 1155072 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:35.284069+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1146880 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:36.284211+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1146880 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:37.284353+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1146880 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:38.284544+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1138688 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:39.284741+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1138688 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:40.284921+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1114112 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:41.285080+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1114112 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:42.285257+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1105920 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:43.285404+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1105920 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:44.285670+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1105920 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:45.285954+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1097728 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:46.286180+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1097728 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:47.286364+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1089536 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:48.286599+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1089536 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:49.286806+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1081344 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:50.286980+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1081344 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:51.287147+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1081344 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:52.287293+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1073152 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:53.287431+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1073152 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:54.287621+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1073152 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:55.287796+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1064960 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:56.287923+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1064960 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:57.288126+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1064960 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:58.288247+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1064960 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:46:59.288414+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1064960 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:00.288532+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1048576 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:01.288666+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:02.288858+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:03.289064+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:04.289214+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:05.289369+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:06.289524+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:07.289610+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:08.289734+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:09.289912+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:10.290048+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1024000 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:11.290188+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1024000 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:12.290320+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1024000 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:13.290457+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1024000 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:14.290591+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1024000 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:15.290793+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:16.290908+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:17.291051+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:18.291273+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:19.291419+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:20.291630+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:21.291784+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:22.292026+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:23.292180+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:24.292330+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:25.292541+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:26.292705+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:27.292854+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:28.292989+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:29.294436+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:30.295295+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:31.295758+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:32.296713+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:33.297408+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:34.297764+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:35.297976+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:36.298126+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:37.298259+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:38.298398+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:39.298547+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 958464 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:40.298709+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:41.298959+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:42.299083+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:43.299236+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:44.299774+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:45.299946+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:46.300096+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:47.300278+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:48.300417+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:49.300547+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:50.300703+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:51.300874+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:52.301027+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:53.301157+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:54.301300+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 925696 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:55.301458+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 925696 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:56.301600+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 925696 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:57.301729+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 925696 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:58.301866+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 925696 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:59.302011+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 909312 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:00.302152+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 909312 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:01.302267+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:02.302404+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:03.302615+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:04.303018+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:05.303314+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:06.303479+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:07.303634+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:08.303754+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:09.303889+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:10.304052+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:11.304219+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:12.304428+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:13.304600+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:14.304784+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:15.305056+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 884736 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:16.305217+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 884736 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:17.305438+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 884736 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:18.305603+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 884736 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:19.305754+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 868352 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:20.305872+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:21.306016+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:22.306170+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:23.306302+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:24.306456+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:25.306689+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:26.306866+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:27.307064+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:28.307266+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:29.307481+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:30.307638+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:31.307812+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:32.307964+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:33.308127+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:34.308307+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:35.308945+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:36.309095+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:37.309236+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 843776 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:38.309420+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 843776 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:39.309598+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:40.309766+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:41.309919+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:42.310046+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:43.310197+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:44.310360+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:45.310512+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:46.310674+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:47.310806+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:48.310962+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:49.311092+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:50.311245+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:51.311377+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:52.311578+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:53.311703+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:54.311811+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 819200 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:55.311990+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 819200 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:56.312112+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 819200 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:57.312248+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 819200 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:58.312437+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 819200 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:59.312639+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:00.312798+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:01.312911+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:02.313054+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:03.313245+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:04.313395+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:05.313624+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:06.313756+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:07.313890+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:08.314027+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:09.314183+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:10.314312+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:11.314537+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:12.314789+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:13.314916+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:14.315045+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 794624 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:15.315224+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 794624 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:16.315357+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 794624 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:17.315490+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 794624 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:18.315644+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 794624 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:19.315798+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 778240 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:20.315950+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 778240 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:21.316071+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 778240 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:22.316221+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 778240 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:23.316427+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 778240 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:24.316631+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 770048 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:25.316823+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 770048 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:26.316953+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 770048 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:27.317068+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 770048 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:28.317185+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:29.317317+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:30.317465+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:31.317653+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:32.317823+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:33.317970+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:34.318160+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:35.318371+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:36.318525+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:37.318660+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:38.318773+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:39.319628+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 745472 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:40.319794+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 745472 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:41.320006+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 745472 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:42.320652+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 745472 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:43.320767+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 745472 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:44.320959+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:45.321292+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:46.321395+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:47.321603+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:48.321740+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:49.321907+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:50.322134+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:51.322285+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:52.322525+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:53.322664+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:54.322875+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:55.323066+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:56.323198+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:57.323398+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:58.323543+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:59.323774+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc ms_handle_reset ms_handle_reset con 0x558773490000
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: get_auth_request con 0x558773307c00 auth_method 0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:00.323975+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:01.324128+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:02.324355+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:03.324707+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:04.324948+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 ms_handle_reset con 0x558772a55400 session 0x55877180d180
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x558774ceb400
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:05.325117+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:06.325263+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:07.325434+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:08.325574+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:09.325690+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:10.325826+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:11.325965+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:12.326097+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:13.326385+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:14.326583+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:15.326708+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:16.326882+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:17.327017+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:18.327183+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:19.327324+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:20.327774+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:21.327905+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:22.328061+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:23.328208+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:24.328361+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:25.328577+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:26.328737+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:27.328922+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:28.329077+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:29.329228+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:30.329341+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:31.329486+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:32.329626+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:33.329754+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:34.329892+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:35.330054+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:36.330176+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:37.330302+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:38.330467+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:39.330609+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:40.330770+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:41.330899+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:42.331039+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:43.331204+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:44.331920+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:45.332064+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 212992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:46.332694+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 212992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:47.333112+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 212992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:48.333523+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 212992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:49.333729+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 212992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:50.334000+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:51.334307+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:52.334489+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:53.334653+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:54.334900+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:55.335129+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:56.335198+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:57.335342+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:58.335483+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:59.335647+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:00.335825+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:01.335977+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:02.336175+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:03.336383+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 300.046569824s of 300.317047119s, submitted: 106
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x558774ceb800
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:04.336541+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:05.336753+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:06.336886+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:07.337017+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:08.337236+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:09.337470+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:10.337647+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:11.337825+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:12.338090+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:13.338258+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:14.338402+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:15.338583+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:16.338712+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:17.338829+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:18.338991+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:19.339202+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:20.339548+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:21.339708+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:22.339874+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:23.340021+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:24.340188+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:25.340361+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:26.342622+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:27.342740+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:28.342897+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:29.343049+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:30.343199+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:31.343305+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:32.343428+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:33.343662+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:34.343792+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:35.344022+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:36.344186+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:37.344391+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:38.344522+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:39.344733+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:40.344868+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:41.344995+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:42.345123+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:43.345311+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:44.345496+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:45.345668+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:46.345814+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:47.345931+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:48.346076+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:49.346292+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:50.346456+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:51.346662+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:52.346860+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:53.347015+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:54.347313+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:55.347606+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:56.347733+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:57.347962+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:58.348154+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:59.348350+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:00.348501+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:01.348626+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:02.348835+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:03.349031+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:04.349205+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:05.349451+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:06.349614+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:07.349746+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:08.349915+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:09.350037+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:10.350176+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:11.350311+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:12.350471+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:13.350663+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:14.350836+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:15.351076+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:16.351196+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:17.351337+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:18.351481+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:19.351622+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:20.351798+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:21.351953+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:22.352434+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:23.352692+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:24.353115+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:25.353351+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:26.353642+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:27.353966+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:28.354226+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:29.354548+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:30.354885+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:31.355251+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:32.355440+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:33.355679+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:34.355882+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:35.356122+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:36.356303+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:37.356445+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:38.356760+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:39.356967+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:40.357152+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:41.357381+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:42.357570+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:43.357722+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:44.357905+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:45.358072+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:46.358248+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:47.358427+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:48.358651+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:49.358775+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:50.358882+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:51.359022+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:52.359135+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:53.359252+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:54.359377+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:55.359567+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:56.359674+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:57.359801+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:58.359908+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:59.360067+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:00.360192+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:01.360498+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:02.360741+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:03.361016+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:04.361180+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:05.361387+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:06.361583+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:07.361680+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:08.361847+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:09.361999+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:10.362114+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:11.362216+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:12.362372+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:13.362528+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:14.362729+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:15.362925+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:16.363049+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:17.363199+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:18.363349+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:19.363533+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:20.363699+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:21.363866+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:22.363998+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:23.364125+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:24.364300+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:25.364488+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:26.364630+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:27.364758+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:28.364920+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:29.365112+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:30.365261+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:31.365433+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:32.365606+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:33.365744+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:34.365876+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:35.366037+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:36.366206+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:37.366368+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:38.366523+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:39.366656+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:40.366813+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:41.366967+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:42.367154+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:43.367300+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:44.367450+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:45.367693+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:46.367842+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:47.367966+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:48.368091+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:49.368257+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:50.368421+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:51.368658+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:52.368843+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:53.369053+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:54.369189+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:55.369407+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:56.369589+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:57.369710+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:58.369859+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:59.370059+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:00.370192+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:01.370328+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:02.370472+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:03.370633+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:04.370873+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:05.371107+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:06.371260+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:07.371406+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:08.371633+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:09.371811+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:10.371968+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:11.372126+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:12.372321+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:13.372471+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread fragmentation_score=0.000117 took=0.000012s
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:14.372740+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:15.373014+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:16.373217+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:17.373347+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:18.373520+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:19.373811+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:20.374086+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:21.374223+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:22.374382+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:23.374540+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:24.384482+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:25.384725+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:26.385052+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:27.385286+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:28.385497+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:29.386934+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:30.388045+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:31.388931+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:32.389609+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:33.390057+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:34.390245+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:35.390607+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:36.390850+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:37.391099+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:38.391281+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:39.391645+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:40.391981+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:41.392203+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:42.392442+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:43.392635+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:44.392783+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:45.393090+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:46.393518+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:47.393914+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:48.394073+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:49.394229+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:50.394469+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5791 writes, 24K keys, 5791 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5791 writes, 971 syncs, 5.96 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:51.394623+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:52.394806+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:53.394948+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:54.395077+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:55.395266+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:56.395431+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:57.395617+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:58.395775+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:59.395900+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:00.396065+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:01.396198+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1163264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:02.396411+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1163264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:03.396597+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1163264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:04.396752+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1163264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:05.396940+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1163264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:06.397181+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:07.397412+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:08.397533+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:09.397740+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:10.397898+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:11.398043+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:12.398229+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:13.398405+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:14.398600+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:15.398784+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:16.398930+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:17.399104+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:18.399290+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:19.399502+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:20.399734+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:21.399884+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:22.400088+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:23.400352+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:24.400708+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:25.400997+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:26.401267+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:27.401459+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:28.402437+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:29.402663+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:30.402910+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:31.403061+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:32.403259+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:33.403480+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:34.403709+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:35.403905+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:36.404173+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:37.404392+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:38.404553+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:39.404760+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:40.404935+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:41.405059+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:42.405215+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:43.405444+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:44.405657+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:45.405913+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:46.406130+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:47.406308+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:48.406531+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:49.406787+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:50.406914+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:51.407068+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:52.407195+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:53.407355+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:54.407510+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:55.407730+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:56.407946+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:57.408142+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:58.408354+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:59.408524+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:00.408740+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:01.408920+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:02.409148+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:03.409292+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 300.099670410s of 300.125030518s, submitted: 18
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1089536 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:04.409422+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:05.409617+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 1974272 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:06.409753+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:07.409901+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:08.410061+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:09.410240+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:10.410454+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:11.410698+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:12.410899+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:13.411075+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:14.411243+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:15.411513+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:16.411673+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:17.411839+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:18.411985+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:19.412209+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:20.412668+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:21.412815+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:22.412976+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:23.413162+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:24.413747+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:25.413922+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:26.414052+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:27.414197+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:28.414324+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:29.414604+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:30.414877+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:31.415054+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:32.415237+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:33.415436+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:34.415586+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:35.415801+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:36.415931+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:37.416130+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:38.416275+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:39.416454+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:40.416695+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:41.416861+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:42.416981+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:43.417108+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:44.417258+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:45.417452+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:46.417612+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:47.417795+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:48.417897+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:49.418027+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:50.418163+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:51.418319+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:52.418437+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:53.418608+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:54.418761+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:55.418902+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:56.419086+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:57.419226+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:58.419416+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:59.419574+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:00.419740+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:01.419925+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:02.420121+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:03.420303+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:04.420440+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:05.420637+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:06.420804+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:07.420942+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:08.421120+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:09.421278+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:10.421377+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:11.421528+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:12.421635+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:13.421766+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:14.421867+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:15.422044+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:16.422188+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:17.422317+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:18.422443+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:19.422627+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:20.422802+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:21.422936+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:22.423047+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:23.423160+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:24.423311+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:25.423459+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:26.423651+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:27.423829+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:28.423992+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:29.424150+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:30.424255+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:31.424497+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:32.424646+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:33.424853+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:34.424969+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:35.425132+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:36.425309+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14630 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:37.425440+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:38.425566+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:39.425766+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:40.425892+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:41.426008+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:42.426221+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:43.426367+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:44.426519+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:45.426925+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:46.427097+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:47.427231+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:48.427401+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:49.427607+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:50.427770+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:51.427939+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:52.432549+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:53.432756+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:54.432879+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:55.433177+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:56.433342+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:57.433463+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:58.433641+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:59.433748+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:00.433931+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:01.434031+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:02.434212+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:03.434383+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:04.434510+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:05.434769+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:06.434959+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:07.435094+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:08.435231+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:09.435407+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:10.435531+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:11.435695+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:12.435848+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:13.435993+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:14.436437+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:15.436762+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:16.437078+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:17.437296+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:18.437519+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:19.437691+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:20.437969+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:21.438113+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:22.438357+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:23.438539+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:24.438804+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:25.439216+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:26.439431+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:27.439675+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:28.439848+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:29.440080+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:30.440269+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:31.440477+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:32.440723+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:33.440941+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:34.441116+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:35.441290+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:36.441494+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:37.441682+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:38.441883+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:39.442063+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:40.442263+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:41.442435+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:42.442683+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:43.442942+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:44.443154+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:45.443384+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:46.443651+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:47.443893+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:48.444120+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:49.444306+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:50.444500+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:51.444717+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:52.444934+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:53.445284+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:54.445491+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:55.445766+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:56.445953+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:57.446125+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:58.446299+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:59.446636+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 118 handle_osd_map epochs [118,119], i have 119, src has [1,119]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 175.431274414s of 176.003372192s, submitted: 106
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 1941504 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:00.446914+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 1941504 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb6000/0x0/0x4ffc00000, data 0xb65be/0x172000, compress 0x0/0x0/0x0, omap 0x155f8, meta 0x2bbaa08), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:01.447122+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 1933312 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984724 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:02.447343+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 1933312 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x55877572c400
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 121 ms_handle_reset con 0x55877572c400 session 0x55877594a8c0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fceb6000/0x0/0x4ffc00000, data 0xb65be/0x172000, compress 0x0/0x0/0x0, omap 0x155f8, meta 0x2bbaa08), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:03.447639+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 1753088 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x5587760a4000
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:04.447839+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 10878976 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 121 ms_handle_reset con 0x5587760a4000 session 0x5587731b3180
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:05.448115+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:06.448343+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012426 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:07.448544+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x529d51/0x5e9000, compress 0x0/0x0/0x0, omap 0x15b42, meta 0x2bba4be), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:08.448798+0000)
Jan 23 19:11:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:09.449046+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:10.449271+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.254834175s of 11.334155083s, submitted: 17
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:11.449512+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017830 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:12.449681+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:13.449848+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:14.450050+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:15.450287+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:16.450491+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017830 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:17.450710+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:18.450889+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:19.451056+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:20.451275+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:21.451499+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017830 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:22.451710+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:23.451876+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:24.452087+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:25.452472+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 10
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:26.452674+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017830 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:27.452893+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:28.453057+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:29.453276+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:30.453424+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:31.453621+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017830 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:32.453810+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:33.453979+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 11
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:34.454175+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 10903552 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:35.454520+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.813196182s of 24.820951462s, submitted: 9
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 10903552 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:36.454718+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 10862592 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020764 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:37.454870+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 10854400 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:38.455035+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 10854400 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:39.455202+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 10854400 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:40.455344+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fca3b000/0x0/0x4ffc00000, data 0x52d3d5/0x5ef000, compress 0x0/0x0/0x0, omap 0x160e8, meta 0x2bb9f18), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:41.455473+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca38000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023378 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:42.455618+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:43.455770+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:44.456006+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca38000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:45.456200+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:46.456358+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023522 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:47.456485+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:48.456649+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:49.456840+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.629572868s of 14.033327103s, submitted: 35
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:50.456967+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca3a000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:51.457102+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca3a000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022818 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:52.457260+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:53.457418+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:54.457726+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca3a000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:55.457972+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:56.458153+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022818 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:57.458354+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:58.458540+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:59.458772+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca3a000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:00.459004+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:01.459222+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022818 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:02.459393+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:03.459595+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.991110802s of 13.999632835s, submitted: 4
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca3a000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:04.459824+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:05.460067+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:06.460273+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026312 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:07.460437+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 10764288 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:08.460764+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 10764288 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fca35000/0x0/0x4ffc00000, data 0x530a59/0x5f5000, compress 0x0/0x0/0x0, omap 0x165f2, meta 0x2bb9a0e), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:09.460981+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 10764288 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:10.461123+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 10764288 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 126 handle_osd_map epochs [126,127], i have 127, src has [1,127]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x530a59/0x5f5000, compress 0x0/0x0/0x0, omap 0x165f2, meta 0x2bb9a0e), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:11.461325+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028926 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:12.461626+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:13.461824+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:14.461972+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:15.462204+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:16.462357+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028926 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:17.462618+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:18.462831+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:19.463189+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:20.463391+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:21.463642+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028926 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:22.463895+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:23.464037+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:24.465435+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:25.466355+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:26.467290+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028926 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:27.467683+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:28.467997+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:29.468168+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x5587760a4400
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.003675461s of 25.996925354s, submitted: 34
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 10567680 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:30.468406+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 10567680 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:31.468540+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 10559488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030058 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:32.468729+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 10559488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:33.469071+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca33000/0x0/0x4ffc00000, data 0x532573/0x5f9000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 10559488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:34.469234+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 10559488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:35.469415+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 10559488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:36.469573+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 10551296 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031844 data_alloc: 218103808 data_used: 4433
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:37.469706+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fca2f000/0x0/0x4ffc00000, data 0x53413d/0x5fb000, compress 0x0/0x0/0x0, omap 0x16bd1, meta 0x2bb942f), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fca2f000/0x0/0x4ffc00000, data 0x53413d/0x5fb000, compress 0x0/0x0/0x0, omap 0x16bd1, meta 0x2bb942f), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 9486336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:38.469884+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 9469952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:39.470050+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fca29000/0x0/0x4ffc00000, data 0x535dfd/0x5ff000, compress 0x0/0x0/0x0, omap 0x16ddf, meta 0x2bb9221), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.783244133s of 10.030097008s, submitted: 63
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 9469952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:40.470192+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 9469952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:41.470332+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 9469952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035734 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:42.470491+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 9469952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:43.470622+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 9461760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:44.470859+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 9527296 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:45.471022+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fca2d000/0x0/0x4ffc00000, data 0x535dfd/0x5ff000, compress 0x0/0x0/0x0, omap 0x16ddf, meta 0x2bb9221), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 9527296 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fca2d000/0x0/0x4ffc00000, data 0x535dfd/0x5ff000, compress 0x0/0x0/0x0, omap 0x16ddf, meta 0x2bb9221), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:46.471161+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 9461760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042452 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:47.471294+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 9412608 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:48.471502+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 9404416 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:49.471660+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 9404416 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fca25000/0x0/0x4ffc00000, data 0x5396ef/0x605000, compress 0x0/0x0/0x0, omap 0x17424, meta 0x2bb8bdc), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.577225685s of 10.690270424s, submitted: 59
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:50.471821+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 9404416 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:51.472166+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 9396224 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050872 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:52.472315+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 9396224 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:53.472488+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53cfc1/0x60b000, compress 0x0/0x0/0x0, omap 0x179f5, meta 0x2bb860b), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 9396224 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:54.472673+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 9396224 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:55.472855+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 9371648 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:56.472970+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 9355264 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047996 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:57.473107+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 9322496 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fca22000/0x0/0x4ffc00000, data 0x53cf26/0x60a000, compress 0x0/0x0/0x0, omap 0x179f5, meta 0x2bb860b), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:58.473308+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 9322496 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:59.473499+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 9322496 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:00.473661+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 9314304 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.739472389s of 10.844255447s, submitted: 58
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:01.473802+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051346 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:02.473926+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:03.474059+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:04.474213+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:05.474411+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:06.474653+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051346 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:07.474861+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:08.474997+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:09.475146+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:10.475291+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:11.475551+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051346 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:12.475838+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:13.476033+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:14.476202+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:15.476378+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:16.476590+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051346 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:17.476720+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:18.476854+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:19.476990+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:20.477124+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.109880447s of 20.121318817s, submitted: 11
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:21.477247+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 9273344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054170 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:22.477421+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 9273344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:23.477598+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53eb1b/0x60f000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 9273344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:24.477724+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53eb1b/0x60f000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:25.477868+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 9232384 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:26.478012+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 134 handle_osd_map epochs [134,135], i have 135, src has [1,135]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058318 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:27.478140+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca18000/0x0/0x4ffc00000, data 0x5406b5/0x612000, compress 0x0/0x0/0x0, omap 0x180e9, meta 0x2bb7f17), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:28.478280+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:29.478396+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:30.478523+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:31.478660+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059768 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:32.478805+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:33.478933+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542069/0x613000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:34.479113+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:35.479314+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:36.479447+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059768 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:37.479604+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:38.479729+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542069/0x613000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:39.479873+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:40.479999+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542069/0x613000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:41.480167+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542069/0x613000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059768 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:42.480291+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.286027908s of 21.392595291s, submitted: 50
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:43.480421+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542069/0x613000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:44.480667+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:45.480821+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 9240576 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:46.480932+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 9240576 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:47.481064+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062576 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 9240576 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:48.481195+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x54219f/0x615000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 9240576 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:49.481324+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 9240576 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:50.481451+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:51.481615+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:52.481802+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064268 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x5421fe/0x616000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.978779793s of 10.000945091s, submitted: 8
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:53.481983+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:54.482139+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:55.482315+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:56.482482+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:57.482621+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066630 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:58.482780+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca15000/0x0/0x4ffc00000, data 0x542299/0x617000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:59.482924+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:00.483062+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 9175040 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:01.483237+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 9142272 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:02.483448+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064794 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.141513824s of 10.157646179s, submitted: 7
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 9142272 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:03.483581+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x5421fe/0x616000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 9142272 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:04.483758+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 9076736 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:05.484432+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca15000/0x0/0x4ffc00000, data 0x5421cf/0x616000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 9076736 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:06.484590+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 9076736 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:07.484731+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064954 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x5421cf/0x616000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 9060352 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:08.484896+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 9060352 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:09.485096+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 9052160 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:10.485285+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 9043968 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:11.485459+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 9043968 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:12.485605+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064236 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 9043968 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:13.485723+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542134/0x615000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 9043968 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:14.485842+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.637335777s of 11.865023613s, submitted: 11
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 9043968 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:15.485977+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 12
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542134/0x615000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 7938048 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:16.486100+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:17.486314+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 7938048 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064236 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542134/0x615000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:18.486536+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 7929856 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542134/0x615000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:19.486701+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 7929856 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:20.486854+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 7929856 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:21.487000+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 136 handle_osd_map epochs [136,137], i have 137, src has [1,137]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:22.487161+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 7929856 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067554 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:23.487325+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:24.487495+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x543d09/0x617000, compress 0x0/0x0/0x0, omap 0x1864c, meta 0x2bb79b4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:25.487712+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:26.487925+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:27.488091+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067108 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.785848618s of 13.113835335s, submitted: 29
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x543d09/0x617000, compress 0x0/0x0/0x0, omap 0x1864c, meta 0x2bb79b4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:28.488230+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:29.488429+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:30.488643+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca15000/0x0/0x4ffc00000, data 0x543d09/0x617000, compress 0x0/0x0/0x0, omap 0x1864c, meta 0x2bb79b4), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:31.488773+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca10000/0x0/0x4ffc00000, data 0x545788/0x61a000, compress 0x0/0x0/0x0, omap 0x1896c, meta 0x2bb7694), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:32.488984+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069738 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:33.489195+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca10000/0x0/0x4ffc00000, data 0x545788/0x61a000, compress 0x0/0x0/0x0, omap 0x1896c, meta 0x2bb7694), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:34.489418+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:35.489605+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:36.489787+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:37.489980+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068604 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:38.490210+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 7905280 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:39.490424+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 7905280 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x5456ed/0x619000, compress 0x0/0x0/0x0, omap 0x1896c, meta 0x2bb7694), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x5456ed/0x619000, compress 0x0/0x0/0x0, omap 0x1896c, meta 0x2bb7694), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:40.490655+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 7905280 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.880651474s of 12.901146889s, submitted: 17
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x5456ed/0x619000, compress 0x0/0x0/0x0, omap 0x1896c, meta 0x2bb7694), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 138 handle_osd_map epochs [138,139], i have 139, src has [1,139]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:41.490791+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:42.490997+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072066 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:43.491141+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:44.491316+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:45.491503+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:46.491680+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fca10000/0x0/0x4ffc00000, data 0x547322/0x61c000, compress 0x0/0x0/0x0, omap 0x18c30, meta 0x2bb73d0), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 139 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:47.491825+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074696 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:48.491989+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:49.492186+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:50.492346+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 140 handle_osd_map epochs [140,141], i have 141, src has [1,141]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.944893837s of 10.524827957s, submitted: 44
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:51.492516+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:52.492665+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fca07000/0x0/0x4ffc00000, data 0x54aa8d/0x623000, compress 0x0/0x0/0x0, omap 0x1921f, meta 0x2bb6de1), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079018 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:53.492853+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fca07000/0x0/0x4ffc00000, data 0x54aa8d/0x623000, compress 0x0/0x0/0x0, omap 0x1921f, meta 0x2bb6de1), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fca07000/0x0/0x4ffc00000, data 0x54aa8d/0x623000, compress 0x0/0x0/0x0, omap 0x1921f, meta 0x2bb6de1), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:54.493062+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:55.493310+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:56.493601+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:57.493794+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081792 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:58.493939+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:59.494097+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:00.494247+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:01.494399+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:02.494545+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081792 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.529521942s of 11.548330307s, submitted: 22
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:03.494716+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:04.494905+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:05.495128+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:06.495316+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:07.495542+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081936 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:08.495739+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:09.495885+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:10.496450+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:11.496662+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:12.496979+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081232 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:13.497193+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:14.497413+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.203193665s of 12.330730438s, submitted: 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:15.497672+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:16.497858+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:17.498043+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082908 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:18.498219+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:19.498420+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5a7/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:20.498600+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:21.498778+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:22.498923+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c642/0x628000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084456 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:23.499060+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:24.499243+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:25.500276+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:26.500494+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca03000/0x0/0x4ffc00000, data 0x54c6dd/0x629000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:27.500743+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086004 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:28.500946+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.663117409s of 13.674868584s, submitted: 5
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:29.501211+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca03000/0x0/0x4ffc00000, data 0x54c6dd/0x629000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:30.501388+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 7503872 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:31.501630+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 7503872 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 13
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:32.501756+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085270 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:33.501904+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5a7/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:34.502043+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:35.502261+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:36.502423+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:37.502607+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 7462912 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c5d5/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085814 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:38.502763+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 7462912 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:39.502937+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 7462912 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.330520630s of 10.966326714s, submitted: 146
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:40.503087+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 7454720 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:41.503272+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 7454720 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5d3/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5d3/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:42.503459+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084090 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:43.503545+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:44.503672+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:45.503850+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:46.504012+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:47.504131+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084106 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:48.504258+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:49.504397+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:50.504553+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:51.504741+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.110977173s of 12.122784615s, submitted: 6
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:52.504893+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084106 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:53.505044+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:54.505195+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:55.505435+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:56.505700+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:57.505839+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084106 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:58.505981+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:59.506107+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:00.506259+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:01.506385+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:02.506634+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.884099960s of 10.897595406s, submitted: 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084122 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:03.506843+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:04.506992+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:05.507235+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:06.507428+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:07.507585+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084122 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:08.507738+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:09.507887+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:10.508015+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:11.508206+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:12.508380+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5a7/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085814 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:13.508518+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.703331947s of 10.725868225s, submitted: 6
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:14.508655+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:15.508818+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:16.508947+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:17.509083+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087330 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:18.509245+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca02000/0x0/0x4ffc00000, data 0x54c70a/0x629000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:19.509387+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 7405568 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:20.509472+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:21.509604+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:22.509764+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088144 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:23.509881+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.101762772s of 10.280632019s, submitted: 10
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:24.510046+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:25.510189+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:26.510343+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:27.510468+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086980 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:28.510619+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:29.510770+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:30.510950+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 7405568 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:31.511107+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 7405568 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:32.511261+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 7405568 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086022 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:33.511419+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 7536640 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:34.511632+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 7536640 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.128376961s of 11.193220139s, submitted: 8
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:35.511815+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c5d4/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 7536640 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:36.512023+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 7536640 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:37.512204+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 7536640 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087714 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:38.512488+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 7495680 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:39.512733+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 7495680 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:40.512903+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 7495680 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:41.513114+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5d2/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 7487488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:42.513366+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 7479296 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088512 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:43.513633+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 7479296 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:44.513934+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c670/0x628000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:45.514214+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:46.514481+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c66e/0x628000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.873952866s of 11.819092751s, submitted: 15
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:47.514679+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089470 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:48.514944+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5a7/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:49.515209+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:50.515398+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7099 writes, 27K keys, 7099 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7099 writes, 1492 syncs, 4.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1308 writes, 3048 keys, 1308 commit groups, 1.0 writes per commit group, ingest: 1.71 MB, 0.00 MB/s
                                           Interval WAL: 1308 writes, 521 syncs, 2.51 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:51.515553+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:52.515832+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088880 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:53.516042+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:54.516312+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:55.516689+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:56.516852+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.971531868s of 10.009545326s, submitted: 5
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:57.517050+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088896 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:58.517267+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:59.517501+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc ms_handle_reset ms_handle_reset con 0x558773307c00
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: get_auth_request con 0x558772a55400 auth_method 0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 7208960 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:00.517725+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 7225344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:01.517894+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 7225344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:02.518111+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [0,0,1])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 7225344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088896 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:03.518302+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 7225344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:04.518491+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 ms_handle_reset con 0x558774ceb400 session 0x558773f09500
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x5587760a5000
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 7094272 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:05.518728+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 7233536 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:06.518919+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 7233536 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:07.519099+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 7233536 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088896 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:08.519296+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.829608917s of 11.841785431s, submitted: 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [0,0,0,1])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 7069696 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:09.519541+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 6750208 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:10.519732+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 5988352 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:11.519911+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 2048000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:12.520087+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 1671168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101070 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb7f9000/0x0/0x4ffc00000, data 0x5b88e7/0x693000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x3d56a95), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:13.520249+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87080960 unmapped: 1835008 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:14.520399+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 1802240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:15.520595+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87359488 unmapped: 1556480 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:16.520782+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 1597440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:17.520960+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb7e0000/0x0/0x4ffc00000, data 0x5d2267/0x6ac000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x3d56a95), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 1597440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102366 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:18.521202+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.226122856s of 10.100378036s, submitted: 43
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 1654784 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:19.521398+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 1646592 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:20.521544+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb7a0000/0x0/0x4ffc00000, data 0x60e4ca/0x6ea000, compress 0x0/0x0/0x0, omap 0x19783, meta 0x3d5687d), peers [1,2] op hist [0,0,0,0,1])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87359488 unmapped: 1556480 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:21.521742+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 1622016 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:22.521914+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 1589248 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107106 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:23.522055+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 974848 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:24.522194+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb76e000/0x0/0x4ffc00000, data 0x64287e/0x71e000, compress 0x0/0x0/0x0, omap 0x19783, meta 0x3d5687d), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 729088 heap: 89964544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:25.522352+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb6ee000/0x0/0x4ffc00000, data 0x6c0562/0x79d000, compress 0x0/0x0/0x0, omap 0x19783, meta 0x3d5687d), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 434176 heap: 89964544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:26.522515+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 434176 heap: 89964544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb6d4000/0x0/0x4ffc00000, data 0x6da2a6/0x7b7000, compress 0x0/0x0/0x0, omap 0x19783, meta 0x3d5687d), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:27.522667+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 143 handle_osd_map epochs [143,144], i have 144, src has [1,144]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 2777088 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117120 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:28.522800+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.659546852s of 10.024695396s, submitted: 107
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 2883584 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:29.522934+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 2777088 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:30.523118+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 88989696 unmapped: 3072000 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb687000/0x0/0x4ffc00000, data 0x724489/0x803000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:31.523577+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 3055616 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:32.523811+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb662000/0x0/0x4ffc00000, data 0x74bf73/0x82a000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90324992 unmapped: 1736704 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129242 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:33.523992+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1409024 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb63c000/0x0/0x4ffc00000, data 0x77037e/0x84f000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:34.524180+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90669056 unmapped: 1392640 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:35.524420+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 2580480 heap: 93110272 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:36.524553+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb604000/0x0/0x4ffc00000, data 0x7a90c4/0x888000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [0,1])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90685440 unmapped: 2424832 heap: 93110272 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:37.524745+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90726400 unmapped: 2383872 heap: 93110272 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134428 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:38.524907+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.119792938s of 10.064953804s, submitted: 67
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90726400 unmapped: 2383872 heap: 93110272 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:39.525172+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 2973696 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:40.525323+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb578000/0x0/0x4ffc00000, data 0x83745e/0x914000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 91226112 unmapped: 2932736 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:41.525449+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 91234304 unmapped: 2924544 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb575000/0x0/0x4ffc00000, data 0x83a809/0x917000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:42.525608+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 2637824 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134906 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:43.525751+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 2744320 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x5587760a5400
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:44.525878+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 1425408 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:45.526026+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 2244608 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:46.526205+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 2244608 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:47.526325+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 14
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb4e6000/0x0/0x4ffc00000, data 0x8c77bc/0x9a6000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94052352 unmapped: 1155072 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150332 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:48.526461+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.701053619s of 10.105629921s, submitted: 89
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93642752 unmapped: 1564672 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:49.526670+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 1490944 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:50.526874+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 1490944 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:51.527089+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93913088 unmapped: 2342912 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:52.527321+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93913088 unmapped: 2342912 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148740 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:53.527504+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45d000/0x0/0x4ffc00000, data 0x94fd63/0xa2f000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93913088 unmapped: 2342912 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:54.527673+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94076928 unmapped: 2179072 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:55.527855+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94076928 unmapped: 2179072 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:56.528113+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94076928 unmapped: 2179072 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:57.528243+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45a000/0x0/0x4ffc00000, data 0x94fd1f/0xa2f000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94076928 unmapped: 2179072 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146978 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:58.528392+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.438689232s of 10.236640930s, submitted: 38
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94085120 unmapped: 2170880 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:59.528519+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94085120 unmapped: 2170880 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:00.528653+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94093312 unmapped: 2162688 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:01.528783+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94101504 unmapped: 2154496 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:02.528913+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94101504 unmapped: 2154496 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148112 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:03.529067+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x94fd20/0xa2f000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94126080 unmapped: 2129920 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:04.529211+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 2097152 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:05.529398+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94240768 unmapped: 2015232 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:06.529606+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 2080768 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:07.529774+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x94fe00/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x94fe00/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 2080768 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149484 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:08.529976+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.595318317s of 10.068367004s, submitted: 91
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:09.530127+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45c000/0x0/0x4ffc00000, data 0x94fe0e/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:10.530287+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1998848 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:11.530492+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 2105344 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:12.530739+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 2088960 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150618 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:13.530911+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 2088960 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:14.531078+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb458000/0x0/0x4ffc00000, data 0x94fe2e/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 2088960 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:15.531253+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 2088960 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:16.531372+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45c000/0x0/0x4ffc00000, data 0x94fe2c/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45c000/0x0/0x4ffc00000, data 0x94fe2c/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:17.531508+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150012 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:18.531687+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:19.531868+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45a000/0x0/0x4ffc00000, data 0x94fd1e/0xa2f000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:20.532002+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:21.532153+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.511313438s of 12.622332573s, submitted: 87
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:22.532344+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147938 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:23.532483+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:24.532692+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:25.532905+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb460000/0x0/0x4ffc00000, data 0x94fb8c/0xa2c000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:26.533095+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:27.533261+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147332 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:28.533424+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:29.533626+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45f000/0x0/0x4ffc00000, data 0x94fc2b/0xa2d000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45f000/0x0/0x4ffc00000, data 0x94fc2b/0xa2d000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:30.533802+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:31.533968+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.959608078s of 10.009652138s, submitted: 11
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:32.534073+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb460000/0x0/0x4ffc00000, data 0x94fb90/0xa2c000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 2056192 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148896 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:33.534224+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 2056192 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:34.534346+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb460000/0x0/0x4ffc00000, data 0x94fb90/0xa2c000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 2056192 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:35.534509+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 144 handle_osd_map epochs [144,145], i have 145, src has [1,145]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:36.534636+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:37.534784+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:38.534943+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151784 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb45c000/0x0/0x4ffc00000, data 0x9516fa/0xa2e000, compress 0x0/0x0/0x0, omap 0x19d9c, meta 0x3d56264), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:39.535109+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:40.535254+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:41.535415+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:42.535642+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.947566986s of 11.059520721s, submitted: 23
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:43.535851+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154414 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb459000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:44.536063+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:45.536276+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb459000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:46.536469+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:47.536636+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:48.536779+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154414 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb459000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:49.536914+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:50.537060+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:51.537243+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb459000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:52.537611+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:53.537818+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154414 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.746775627s of 10.756615639s, submitted: 11
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:54.538010+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:55.538199+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb459000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:56.538335+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:57.538499+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:58.538651+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94240768 unmapped: 2015232 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153854 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:59.538799+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94240768 unmapped: 2015232 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:00.539038+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94240768 unmapped: 2015232 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:01.539180+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 2007040 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:02.539343+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 2007040 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb45a000/0x0/0x4ffc00000, data 0x95328b/0xa32000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:03.539529+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156504 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:04.539702+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.206610680s of 11.225436211s, submitted: 8
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:05.539888+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:06.540068+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:07.540227+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:08.540400+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153838 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:09.540610+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:10.540761+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:11.540936+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:12.541137+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:13.541293+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158880 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:14.541492+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:15.541784+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:16.541943+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb455000/0x0/0x4ffc00000, data 0x954e19/0xa35000, compress 0x0/0x0/0x0, omap 0x1a302, meta 0x3d55cfe), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:17.542150+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb455000/0x0/0x4ffc00000, data 0x954e19/0xa35000, compress 0x0/0x0/0x0, omap 0x1a302, meta 0x3d55cfe), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.126583099s of 13.255347252s, submitted: 23
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:18.542298+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1966080 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161510 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:19.542448+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1966080 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb452000/0x0/0x4ffc00000, data 0x956898/0xa38000, compress 0x0/0x0/0x0, omap 0x1a576, meta 0x3d55a8a), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:20.542619+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1966080 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:21.542792+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1966080 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:22.543050+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1957888 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:23.544156+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1957888 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160344 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb455000/0x0/0x4ffc00000, data 0x9567fd/0xa37000, compress 0x0/0x0/0x0, omap 0x1a576, meta 0x3d55a8a), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:24.544397+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1957888 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:25.545239+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1957888 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb455000/0x0/0x4ffc00000, data 0x9567fd/0xa37000, compress 0x0/0x0/0x0, omap 0x1a576, meta 0x3d55a8a), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:26.545983+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb455000/0x0/0x4ffc00000, data 0x9567fd/0xa37000, compress 0x0/0x0/0x0, omap 0x1a576, meta 0x3d55a8a), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:27.546393+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:28.546909+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163838 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb450000/0x0/0x4ffc00000, data 0x958402/0xa3a000, compress 0x0/0x0/0x0, omap 0x1a84a, meta 0x3d557b6), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:29.547441+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:30.547860+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:31.548264+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb450000/0x0/0x4ffc00000, data 0x958402/0xa3a000, compress 0x0/0x0/0x0, omap 0x1a84a, meta 0x3d557b6), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:32.548632+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.989291191s of 15.049725533s, submitted: 32
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:33.548805+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166468 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:34.548935+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:35.549103+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:36.549253+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:37.549387+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb44f000/0x0/0x4ffc00000, data 0x959e81/0xa3d000, compress 0x0/0x0/0x0, omap 0x1ab96, meta 0x3d5546a), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:38.549529+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165924 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:39.549826+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:40.550003+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:41.550176+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb44f000/0x0/0x4ffc00000, data 0x959e81/0xa3d000, compress 0x0/0x0/0x0, omap 0x1ab96, meta 0x3d5546a), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:42.550316+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:43.551363+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165908 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:44.551600+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:45.551811+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:46.551972+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:47.552168+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb44f000/0x0/0x4ffc00000, data 0x959e81/0xa3d000, compress 0x0/0x0/0x0, omap 0x1ab96, meta 0x3d5546a), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.829722404s of 14.890094757s, submitted: 14
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:48.552364+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165908 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:49.552552+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:50.552787+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb44f000/0x0/0x4ffc00000, data 0x959e81/0xa3d000, compress 0x0/0x0/0x0, omap 0x1ab96, meta 0x3d5546a), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:51.552936+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 1867776 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:52.553073+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb44c000/0x0/0x4ffc00000, data 0x95ba86/0xa40000, compress 0x0/0x0/0x0, omap 0x1adb3, meta 0x3d5524d), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:53.553204+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168666 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:54.553392+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:55.553622+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:56.553776+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:57.553963+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 151 handle_osd_map epochs [151,152], i have 152, src has [1,152]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.087899208s of 10.164388657s, submitted: 26
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:58.554132+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fb44c000/0x0/0x4ffc00000, data 0x95ba86/0xa40000, compress 0x0/0x0/0x0, omap 0x1adb3, meta 0x3d5524d), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 1851392 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172016 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:59.554312+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 1851392 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:00.554511+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95461376 unmapped: 1843200 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb442000/0x0/0x4ffc00000, data 0x95f10a/0xa46000, compress 0x0/0x0/0x0, omap 0x1b302, meta 0x3d54cfe), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:01.554658+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1818624 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:02.554881+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1818624 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 15
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:03.555076+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 1810432 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175762 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:04.555234+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 1810432 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:05.555491+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 1810432 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:06.555651+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 1810432 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb445000/0x0/0x4ffc00000, data 0x95f1a5/0xa47000, compress 0x0/0x0/0x0, omap 0x1b302, meta 0x3d54cfe), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:07.555784+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95518720 unmapped: 1785856 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:08.555923+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95576064 unmapped: 1728512 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.972333908s of 10.498694420s, submitted: 93
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185732 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:09.556068+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 1638400 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:10.556205+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 1638400 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:11.556394+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb43b000/0x0/0x4ffc00000, data 0x964595/0xa51000, compress 0x0/0x0/0x0, omap 0x1bb4d, meta 0x3d544b3), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 1638400 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:12.556549+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 1638400 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:13.556748+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 1638400 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186844 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:14.556961+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96714752 unmapped: 589824 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:15.557173+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fb433000/0x0/0x4ffc00000, data 0x967b73/0xa55000, compress 0x0/0x0/0x0, omap 0x1c224, meta 0x3d53ddc), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:16.557356+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:17.557553+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:18.557754+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190082 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:19.557927+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:20.558112+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:21.558326+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fb433000/0x0/0x4ffc00000, data 0x967b73/0xa55000, compress 0x0/0x0/0x0, omap 0x1c224, meta 0x3d53ddc), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fb433000/0x0/0x4ffc00000, data 0x967b73/0xa55000, compress 0x0/0x0/0x0, omap 0x1c224, meta 0x3d53ddc), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:22.558536+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.193332672s of 14.270849228s, submitted: 49
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fb433000/0x0/0x4ffc00000, data 0x967b73/0xa55000, compress 0x0/0x0/0x0, omap 0x1c224, meta 0x3d53ddc), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:23.558682+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192536 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:24.558889+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:25.559095+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fb432000/0x0/0x4ffc00000, data 0x969612/0xa58000, compress 0x0/0x0/0x0, omap 0x1c612, meta 0x3d539ee), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:26.559261+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:27.559416+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:28.559548+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195166 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:29.559753+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:30.559942+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 160 ms_handle_reset con 0x5587760a5400 session 0x558775b77a40
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 671744 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:31.560135+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb431000/0x0/0x4ffc00000, data 0x96b217/0xa5b000, compress 0x0/0x0/0x0, omap 0x1c834, meta 0x3d537cc), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:32.560317+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 16
Jan 23 19:11:18 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14632 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.137248993s of 10.324272156s, submitted: 247
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:33.560533+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201338 data_alloc: 218103808 data_used: 5067
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:34.560750+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} v 0)
Jan 23 19:11:18 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:35.560970+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:36.561162+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fb42b000/0x0/0x4ffc00000, data 0x96e8f7/0xa61000, compress 0x0/0x0/0x0, omap 0x1cd9a, meta 0x3d53266), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:37.561323+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:38.561518+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 162 handle_osd_map epochs [162,163], i have 163, src has [1,163]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203776 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fb42b000/0x0/0x4ffc00000, data 0x96e8f7/0xa61000, compress 0x0/0x0/0x0, omap 0x1cd9a, meta 0x3d53266), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:39.561648+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:40.561838+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:41.562052+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:42.562247+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:43.562434+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203616 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:44.562676+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:45.562888+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:46.563144+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:47.563308+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:48.563472+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203616 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:49.563610+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:50.563774+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:51.563995+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:52.564124+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:53.564508+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203616 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:54.564691+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:55.564902+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:56.565087+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:57.565250+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:58.565392+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203616 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:59.565618+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:00.565792+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:01.565977+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:02.566166+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:03.566334+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.618175507s of 30.404338837s, submitted: 25
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205308 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:04.566524+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb425000/0x0/0x4ffc00000, data 0x970431/0xa65000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:05.566780+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:06.566935+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb425000/0x0/0x4ffc00000, data 0x970431/0xa65000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:07.567109+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:08.567277+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205308 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:09.567433+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:10.567729+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb425000/0x0/0x4ffc00000, data 0x970431/0xa65000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:11.567838+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:12.568003+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:13.568182+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206280 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.337510109s of 10.493035316s, submitted: 2
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x9704cc/0xa66000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [0,0,0,0,1])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:14.568428+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x970431/0xa65000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:15.568674+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb428000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:16.568850+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:17.569021+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:18.569169+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204030 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb428000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:19.569357+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:20.569550+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb428000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:21.569761+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:22.569931+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:23.570133+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb423000/0x0/0x4ffc00000, data 0x971f9b/0xa67000, compress 0x0/0x0/0x0, omap 0x1d3f4, meta 0x3d52c0c), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207492 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:24.570327+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:25.570519+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:26.570735+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:27.570965+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb423000/0x0/0x4ffc00000, data 0x971f9b/0xa67000, compress 0x0/0x0/0x0, omap 0x1d3f4, meta 0x3d52c0c), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:28.571117+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207492 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:29.571303+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:30.571487+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:31.571629+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:32.571781+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb423000/0x0/0x4ffc00000, data 0x971f9b/0xa67000, compress 0x0/0x0/0x0, omap 0x1d3f4, meta 0x3d52c0c), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:33.571948+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb423000/0x0/0x4ffc00000, data 0x971f9b/0xa67000, compress 0x0/0x0/0x0, omap 0x1d3f4, meta 0x3d52c0c), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207492 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:34.572113+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:35.572278+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:36.572551+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:37.572820+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:38.573020+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.002613068s of 24.275821686s, submitted: 34
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:39.573182+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:40.573320+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:41.573465+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:42.573620+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:43.573747+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:44.573937+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:45.574164+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:46.574349+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:47.574531+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:48.574742+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:49.574888+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:50.575054+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:51.575203+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:52.575376+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:53.575497+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:54.575736+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:55.575905+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:56.576129+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:57.576270+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:58.576482+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:59.576639+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:00.576777+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:01.576890+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:02.577060+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:03.577256+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:04.577509+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:05.577807+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:06.578021+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:07.578201+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:08.578438+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:09.578669+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:10.578808+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:11.578953+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:12.579141+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:13.579280+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:14.579419+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:15.579609+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:16.579756+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:17.579907+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:18.580067+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:19.580236+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:20.580379+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:21.580515+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:22.580698+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:23.580850+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:24.580969+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:25.581119+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:26.581255+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:27.581481+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:28.581671+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:29.581858+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:30.582084+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 52.165187836s of 52.175907135s, submitted: 10
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 ms_handle_reset con 0x5587760a4400 session 0x5587758f8380
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:31.582292+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:32.582448+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 17
Jan 23 19:11:18 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:33.582604+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:34.582718+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:35.582863+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:36.582994+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:37.583177+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:38.583365+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:39.583499+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:40.583654+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:41.591584+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:42.591746+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:43.591868+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:11:18 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:11:18 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:44.592017+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96903168 unmapped: 1449984 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:45.592188+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: do_command 'config diff' '{prefix=config diff}'
Jan 23 19:11:18 compute-0 ceph-osd[85953]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 19:11:18 compute-0 ceph-osd[85953]: do_command 'config show' '{prefix=config show}'
Jan 23 19:11:18 compute-0 ceph-osd[85953]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 19:11:18 compute-0 ceph-osd[85953]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 19:11:18 compute-0 ceph-osd[85953]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 19:11:18 compute-0 ceph-osd[85953]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 19:11:18 compute-0 ceph-osd[85953]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97394688 unmapped: 2007040 heap: 99401728 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:46.592313+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2195456 heap: 99401728 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:47.592428+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97509376 unmapped: 1892352 heap: 99401728 old mem: 2845415832 new mem: 2845415832
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:11:18 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:48.592539+0000)
Jan 23 19:11:18 compute-0 ceph-osd[85953]: do_command 'log dump' '{prefix=log dump}'
Jan 23 19:11:19 compute-0 ceph-mon[75097]: from='client.14624 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:19 compute-0 ceph-mon[75097]: from='client.14626 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:19 compute-0 ceph-mon[75097]: from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:19 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:11:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14636 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} v 0)
Jan 23 19:11:19 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:11:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:19 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14638 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 23 19:11:19 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451406475' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 23 19:11:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14642 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 23 19:11:20 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4021889983' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 23 19:11:20 compute-0 ceph-mon[75097]: from='client.14630 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:20 compute-0 ceph-mon[75097]: pgmap v1260: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:20 compute-0 ceph-mon[75097]: from='client.14632 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:20 compute-0 ceph-mon[75097]: from='client.14636 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:20 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:11:20 compute-0 ceph-mon[75097]: from='client.14638 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:20 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3451406475' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 23 19:11:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:20 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14646 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 23 19:11:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4177554019' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 23 19:11:21 compute-0 systemd[1]: Starting Hostname Service...
Jan 23 19:11:21 compute-0 systemd[1]: Started Hostname Service.
Jan 23 19:11:21 compute-0 ceph-mon[75097]: from='client.14642 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:21 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4021889983' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 23 19:11:21 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4177554019' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 23 19:11:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 23 19:11:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013991252' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 23 19:11:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 19:11:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 19:11:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 19:11:21 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 19:11:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 23 19:11:22 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355849157' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 23 19:11:22 compute-0 ceph-mon[75097]: pgmap v1261: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:22 compute-0 ceph-mon[75097]: from='client.14646 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:22 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4013991252' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 23 19:11:22 compute-0 ceph-mon[75097]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 19:11:22 compute-0 ceph-mon[75097]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 19:11:22 compute-0 ceph-mon[75097]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 19:11:22 compute-0 ceph-mon[75097]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 19:11:22 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/355849157' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 23 19:11:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:22 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14662 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 23 19:11:23 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3183849112' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 23 19:11:23 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 23 19:11:23 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637325853' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 23 19:11:23 compute-0 podman[257262]: 2026-01-23 19:11:23.99305986 +0000 UTC m=+0.077022932 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 23 19:11:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:24 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3183849112' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 23 19:11:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 23 19:11:24 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957178575' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 23 19:11:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 23 19:11:25 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/508654411' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 23 19:11:25 compute-0 ceph-mon[75097]: pgmap v1262: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:25 compute-0 ceph-mon[75097]: from='client.14662 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:25 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3637325853' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 23 19:11:25 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3957178575' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 23 19:11:25 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/508654411' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 23 19:11:25 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14672 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 23 19:11:26 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/582523214' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 23 19:11:26 compute-0 ceph-mon[75097]: pgmap v1263: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:26 compute-0 ceph-mon[75097]: from='client.14672 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:26 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/582523214' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 23 19:11:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 23 19:11:26 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1564474717' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 23 19:11:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14678 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 23 19:11:27 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2436027444' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 23 19:11:27 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1564474717' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 23 19:11:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14682 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14684 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:28 compute-0 ceph-mon[75097]: pgmap v1264: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:28 compute-0 ceph-mon[75097]: from='client.14678 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:28 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2436027444' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 23 19:11:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:11:28
Jan 23 19:11:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:11:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:11:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.data']
Jan 23 19:11:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:11:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 23 19:11:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187657597' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 23 19:11:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 23 19:11:29 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/101265408' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 23 19:11:29 compute-0 ceph-mon[75097]: from='client.14682 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:29 compute-0 ceph-mon[75097]: from='client.14684 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:29 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2187657597' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 23 19:11:29 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/101265408' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 23 19:11:29 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14690 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:29 compute-0 ovs-appctl[258589]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 23 19:11:30 compute-0 ovs-appctl[258597]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 23 19:11:30 compute-0 ovs-appctl[258608]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14692 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:11:30 compute-0 ceph-mon[75097]: pgmap v1265: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Jan 23 19:11:30 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1709102252' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:11:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:11:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 23 19:11:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738848179' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 23 19:11:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14698 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:31 compute-0 ceph-mon[75097]: from='client.14690 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:31 compute-0 ceph-mon[75097]: from='client.14692 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:31 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1709102252' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 23 19:11:31 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3738848179' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 23 19:11:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14700 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:32 compute-0 sudo[259363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:11:32 compute-0 sudo[259363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:11:32 compute-0 sudo[259363]: pam_unix(sudo:session): session closed for user root
Jan 23 19:11:32 compute-0 sudo[259388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:11:32 compute-0 sudo[259388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:11:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 23 19:11:32 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2052590388' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 23 19:11:32 compute-0 ceph-mon[75097]: pgmap v1266: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:32 compute-0 ceph-mon[75097]: from='client.14698 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:32 compute-0 ceph-mon[75097]: from='client.14700 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:11:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:32 compute-0 sudo[259388]: pam_unix(sudo:session): session closed for user root
Jan 23 19:11:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:11:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:11:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:11:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:11:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:11:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:11:33 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:11:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:11:33 compute-0 sudo[259505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:11:33 compute-0 sudo[259505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:11:33 compute-0 sudo[259505]: pam_unix(sudo:session): session closed for user root
Jan 23 19:11:33 compute-0 sudo[259532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:11:33 compute-0 sudo[259532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:11:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 23 19:11:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177476667' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 23 19:11:33 compute-0 podman[259597]: 2026-01-23 19:11:33.425102929 +0000 UTC m=+0.049698014 container create cf1e94244abc15200319d381aa6bf0deddc6c6b2430a253a4aa1785090448262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 23 19:11:33 compute-0 systemd[1]: Started libpod-conmon-cf1e94244abc15200319d381aa6bf0deddc6c6b2430a253a4aa1785090448262.scope.
Jan 23 19:11:33 compute-0 podman[259597]: 2026-01-23 19:11:33.402279852 +0000 UTC m=+0.026874957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:11:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:11:33 compute-0 podman[259597]: 2026-01-23 19:11:33.525651725 +0000 UTC m=+0.150246830 container init cf1e94244abc15200319d381aa6bf0deddc6c6b2430a253a4aa1785090448262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bassi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:11:33 compute-0 podman[259597]: 2026-01-23 19:11:33.534626474 +0000 UTC m=+0.159221559 container start cf1e94244abc15200319d381aa6bf0deddc6c6b2430a253a4aa1785090448262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bassi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 19:11:33 compute-0 podman[259597]: 2026-01-23 19:11:33.540742274 +0000 UTC m=+0.165337409 container attach cf1e94244abc15200319d381aa6bf0deddc6c6b2430a253a4aa1785090448262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bassi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:11:33 compute-0 eager_bassi[259644]: 167 167
Jan 23 19:11:33 compute-0 systemd[1]: libpod-cf1e94244abc15200319d381aa6bf0deddc6c6b2430a253a4aa1785090448262.scope: Deactivated successfully.
Jan 23 19:11:33 compute-0 podman[259597]: 2026-01-23 19:11:33.543314467 +0000 UTC m=+0.167909562 container died cf1e94244abc15200319d381aa6bf0deddc6c6b2430a253a4aa1785090448262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 19:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-63eeaa60b4b53a4e1dbec4cecac13a8577d858d43a3f81931b2a9a9c2b41e674-merged.mount: Deactivated successfully.
Jan 23 19:11:33 compute-0 podman[259597]: 2026-01-23 19:11:33.599906208 +0000 UTC m=+0.224501293 container remove cf1e94244abc15200319d381aa6bf0deddc6c6b2430a253a4aa1785090448262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bassi, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:11:33 compute-0 systemd[1]: libpod-conmon-cf1e94244abc15200319d381aa6bf0deddc6c6b2430a253a4aa1785090448262.scope: Deactivated successfully.
Jan 23 19:11:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 23 19:11:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496744711' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2052590388' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: pgmap v1267: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:11:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:11:33 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3177476667' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 23 19:11:33 compute-0 podman[259672]: 2026-01-23 19:11:33.75761247 +0000 UTC m=+0.039562488 container create 30498ef65312a05c18d08b17a0683c8ad7c09e684b2013d2fd31e3a6b8d539bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 19:11:33 compute-0 systemd[1]: Started libpod-conmon-30498ef65312a05c18d08b17a0683c8ad7c09e684b2013d2fd31e3a6b8d539bd.scope.
Jan 23 19:11:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c3a429f800a9be2054338140b28f22d9df951a88346cfb33e643bd3b4ad385/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c3a429f800a9be2054338140b28f22d9df951a88346cfb33e643bd3b4ad385/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c3a429f800a9be2054338140b28f22d9df951a88346cfb33e643bd3b4ad385/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c3a429f800a9be2054338140b28f22d9df951a88346cfb33e643bd3b4ad385/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c3a429f800a9be2054338140b28f22d9df951a88346cfb33e643bd3b4ad385/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:33 compute-0 podman[259672]: 2026-01-23 19:11:33.7396178 +0000 UTC m=+0.021567838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:11:33 compute-0 podman[259672]: 2026-01-23 19:11:33.845420894 +0000 UTC m=+0.127370922 container init 30498ef65312a05c18d08b17a0683c8ad7c09e684b2013d2fd31e3a6b8d539bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 19:11:33 compute-0 podman[259672]: 2026-01-23 19:11:33.852336562 +0000 UTC m=+0.134286570 container start 30498ef65312a05c18d08b17a0683c8ad7c09e684b2013d2fd31e3a6b8d539bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 23 19:11:33 compute-0 podman[259672]: 2026-01-23 19:11:33.860120623 +0000 UTC m=+0.142070661 container attach 30498ef65312a05c18d08b17a0683c8ad7c09e684b2013d2fd31e3a6b8d539bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galileo, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 19:11:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14708 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:34 compute-0 nice_galileo[259693]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:11:34 compute-0 nice_galileo[259693]: --> All data devices are unavailable
Jan 23 19:11:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:34 compute-0 systemd[1]: libpod-30498ef65312a05c18d08b17a0683c8ad7c09e684b2013d2fd31e3a6b8d539bd.scope: Deactivated successfully.
Jan 23 19:11:34 compute-0 podman[259672]: 2026-01-23 19:11:34.358176226 +0000 UTC m=+0.640126244 container died 30498ef65312a05c18d08b17a0683c8ad7c09e684b2013d2fd31e3a6b8d539bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galileo, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-47c3a429f800a9be2054338140b28f22d9df951a88346cfb33e643bd3b4ad385-merged.mount: Deactivated successfully.
Jan 23 19:11:34 compute-0 podman[259672]: 2026-01-23 19:11:34.416612323 +0000 UTC m=+0.698562341 container remove 30498ef65312a05c18d08b17a0683c8ad7c09e684b2013d2fd31e3a6b8d539bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:11:34 compute-0 systemd[1]: libpod-conmon-30498ef65312a05c18d08b17a0683c8ad7c09e684b2013d2fd31e3a6b8d539bd.scope: Deactivated successfully.
Jan 23 19:11:34 compute-0 sudo[259532]: pam_unix(sudo:session): session closed for user root
Jan 23 19:11:34 compute-0 sudo[259798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:11:34 compute-0 sudo[259798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:11:34 compute-0 sudo[259798]: pam_unix(sudo:session): session closed for user root
Jan 23 19:11:34 compute-0 sudo[259828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:11:34 compute-0 sudo[259828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:11:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:34 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1496744711' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 23 19:11:34 compute-0 ceph-mon[75097]: from='client.14708 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Jan 23 19:11:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331043847' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 23 19:11:34 compute-0 podman[259865]: 2026-01-23 19:11:34.909640424 +0000 UTC m=+0.066382763 container create 2ed9bbefdc5be5f6e30fe7b20e514f562ea215e36872d44148c9856a74803f2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_fermat, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:11:34 compute-0 podman[259865]: 2026-01-23 19:11:34.871165203 +0000 UTC m=+0.027907582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:11:35 compute-0 systemd[1]: Started libpod-conmon-2ed9bbefdc5be5f6e30fe7b20e514f562ea215e36872d44148c9856a74803f2f.scope.
Jan 23 19:11:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:11:35 compute-0 podman[259865]: 2026-01-23 19:11:35.275084028 +0000 UTC m=+0.431826377 container init 2ed9bbefdc5be5f6e30fe7b20e514f562ea215e36872d44148c9856a74803f2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 19:11:35 compute-0 podman[259865]: 2026-01-23 19:11:35.284669561 +0000 UTC m=+0.441411900 container start 2ed9bbefdc5be5f6e30fe7b20e514f562ea215e36872d44148c9856a74803f2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_fermat, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 23 19:11:35 compute-0 podman[259865]: 2026-01-23 19:11:35.289585912 +0000 UTC m=+0.446328261 container attach 2ed9bbefdc5be5f6e30fe7b20e514f562ea215e36872d44148c9856a74803f2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 19:11:35 compute-0 systemd[1]: libpod-2ed9bbefdc5be5f6e30fe7b20e514f562ea215e36872d44148c9856a74803f2f.scope: Deactivated successfully.
Jan 23 19:11:35 compute-0 tender_fermat[259906]: 167 167
Jan 23 19:11:35 compute-0 conmon[259906]: conmon 2ed9bbefdc5be5f6e30f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ed9bbefdc5be5f6e30fe7b20e514f562ea215e36872d44148c9856a74803f2f.scope/container/memory.events
Jan 23 19:11:35 compute-0 podman[259865]: 2026-01-23 19:11:35.293851556 +0000 UTC m=+0.450593895 container died 2ed9bbefdc5be5f6e30fe7b20e514f562ea215e36872d44148c9856a74803f2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_fermat, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 19:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd90452792b2b4caa82c5c47f4ec3f5b8367cf87dc12c7f7ecb04d7e803f5371-merged.mount: Deactivated successfully.
Jan 23 19:11:35 compute-0 podman[259865]: 2026-01-23 19:11:35.338415714 +0000 UTC m=+0.495158053 container remove 2ed9bbefdc5be5f6e30fe7b20e514f562ea215e36872d44148c9856a74803f2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 19:11:35 compute-0 systemd[1]: libpod-conmon-2ed9bbefdc5be5f6e30fe7b20e514f562ea215e36872d44148c9856a74803f2f.scope: Deactivated successfully.
Jan 23 19:11:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Jan 23 19:11:35 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/281044164' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Jan 23 19:11:35 compute-0 podman[259936]: 2026-01-23 19:11:35.507725059 +0000 UTC m=+0.051063118 container create c612fc43f73a33c6d12b17e8d1424f5b4689334663f2d57c33eb4a8de71685e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 23 19:11:35 compute-0 systemd[1]: Started libpod-conmon-c612fc43f73a33c6d12b17e8d1424f5b4689334663f2d57c33eb4a8de71685e4.scope.
Jan 23 19:11:35 compute-0 podman[259936]: 2026-01-23 19:11:35.485491896 +0000 UTC m=+0.028829965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:11:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a628a2f633359d39b0119516049732d1f5a9718d11fabb9930f04095857c12ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a628a2f633359d39b0119516049732d1f5a9718d11fabb9930f04095857c12ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a628a2f633359d39b0119516049732d1f5a9718d11fabb9930f04095857c12ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a628a2f633359d39b0119516049732d1f5a9718d11fabb9930f04095857c12ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:35 compute-0 podman[259936]: 2026-01-23 19:11:35.598050385 +0000 UTC m=+0.141388474 container init c612fc43f73a33c6d12b17e8d1424f5b4689334663f2d57c33eb4a8de71685e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:11:35 compute-0 podman[259936]: 2026-01-23 19:11:35.609636847 +0000 UTC m=+0.152974916 container start c612fc43f73a33c6d12b17e8d1424f5b4689334663f2d57c33eb4a8de71685e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:11:35 compute-0 podman[259936]: 2026-01-23 19:11:35.614513157 +0000 UTC m=+0.157851206 container attach c612fc43f73a33c6d12b17e8d1424f5b4689334663f2d57c33eb4a8de71685e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 19:11:35 compute-0 ceph-mon[75097]: pgmap v1268: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:35 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/331043847' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 23 19:11:35 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/281044164' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Jan 23 19:11:35 compute-0 strange_faraday[259954]: {
Jan 23 19:11:35 compute-0 strange_faraday[259954]:     "0": [
Jan 23 19:11:35 compute-0 strange_faraday[259954]:         {
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "devices": [
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "/dev/loop3"
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             ],
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_name": "ceph_lv0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_size": "21470642176",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "name": "ceph_lv0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "tags": {
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.cluster_name": "ceph",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.crush_device_class": "",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.encrypted": "0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.objectstore": "bluestore",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.osd_id": "0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.type": "block",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.vdo": "0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.with_tpm": "0"
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             },
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "type": "block",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "vg_name": "ceph_vg0"
Jan 23 19:11:35 compute-0 strange_faraday[259954]:         }
Jan 23 19:11:35 compute-0 strange_faraday[259954]:     ],
Jan 23 19:11:35 compute-0 strange_faraday[259954]:     "1": [
Jan 23 19:11:35 compute-0 strange_faraday[259954]:         {
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "devices": [
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "/dev/loop4"
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             ],
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_name": "ceph_lv1",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_size": "21470642176",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "name": "ceph_lv1",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "tags": {
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.cluster_name": "ceph",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.crush_device_class": "",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.encrypted": "0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.objectstore": "bluestore",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.osd_id": "1",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.type": "block",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.vdo": "0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.with_tpm": "0"
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             },
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "type": "block",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "vg_name": "ceph_vg1"
Jan 23 19:11:35 compute-0 strange_faraday[259954]:         }
Jan 23 19:11:35 compute-0 strange_faraday[259954]:     ],
Jan 23 19:11:35 compute-0 strange_faraday[259954]:     "2": [
Jan 23 19:11:35 compute-0 strange_faraday[259954]:         {
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "devices": [
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "/dev/loop5"
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             ],
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_name": "ceph_lv2",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_size": "21470642176",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "name": "ceph_lv2",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "tags": {
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.cluster_name": "ceph",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.crush_device_class": "",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.encrypted": "0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.objectstore": "bluestore",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.osd_id": "2",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.type": "block",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.vdo": "0",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:                 "ceph.with_tpm": "0"
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             },
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "type": "block",
Jan 23 19:11:35 compute-0 strange_faraday[259954]:             "vg_name": "ceph_vg2"
Jan 23 19:11:35 compute-0 strange_faraday[259954]:         }
Jan 23 19:11:35 compute-0 strange_faraday[259954]:     ]
Jan 23 19:11:35 compute-0 strange_faraday[259954]: }
Jan 23 19:11:35 compute-0 systemd[1]: libpod-c612fc43f73a33c6d12b17e8d1424f5b4689334663f2d57c33eb4a8de71685e4.scope: Deactivated successfully.
Jan 23 19:11:35 compute-0 podman[259936]: 2026-01-23 19:11:35.946066384 +0000 UTC m=+0.489404443 container died c612fc43f73a33c6d12b17e8d1424f5b4689334663f2d57c33eb4a8de71685e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 19:11:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Jan 23 19:11:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3407436793' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Jan 23 19:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a628a2f633359d39b0119516049732d1f5a9718d11fabb9930f04095857c12ee-merged.mount: Deactivated successfully.
Jan 23 19:11:36 compute-0 podman[259936]: 2026-01-23 19:11:36.052183846 +0000 UTC m=+0.595521905 container remove c612fc43f73a33c6d12b17e8d1424f5b4689334663f2d57c33eb4a8de71685e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 19:11:36 compute-0 systemd[1]: libpod-conmon-c612fc43f73a33c6d12b17e8d1424f5b4689334663f2d57c33eb4a8de71685e4.scope: Deactivated successfully.
Jan 23 19:11:36 compute-0 sudo[259828]: pam_unix(sudo:session): session closed for user root
Jan 23 19:11:36 compute-0 sudo[260012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:11:36 compute-0 sudo[260012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:11:36 compute-0 sudo[260012]: pam_unix(sudo:session): session closed for user root
Jan 23 19:11:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Jan 23 19:11:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2161293695' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Jan 23 19:11:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:36 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3407436793' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Jan 23 19:11:36 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2161293695' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Jan 23 19:11:36 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14718 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:37 compute-0 sudo[260060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:11:37 compute-0 sudo[260060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:11:37 compute-0 podman[260165]: 2026-01-23 19:11:37.385405434 +0000 UTC m=+0.048392963 container create c422141e9fb67339b859ac4e1f1dda24eeaae01b74eb3c2254a03ac5f2bb414e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:11:37 compute-0 systemd[1]: Started libpod-conmon-c422141e9fb67339b859ac4e1f1dda24eeaae01b74eb3c2254a03ac5f2bb414e.scope.
Jan 23 19:11:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:11:37 compute-0 podman[260165]: 2026-01-23 19:11:37.363282513 +0000 UTC m=+0.026270072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:11:37 compute-0 podman[260165]: 2026-01-23 19:11:37.475715229 +0000 UTC m=+0.138702778 container init c422141e9fb67339b859ac4e1f1dda24eeaae01b74eb3c2254a03ac5f2bb414e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 23 19:11:37 compute-0 podman[260165]: 2026-01-23 19:11:37.482926615 +0000 UTC m=+0.145914144 container start c422141e9fb67339b859ac4e1f1dda24eeaae01b74eb3c2254a03ac5f2bb414e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 19:11:37 compute-0 podman[260165]: 2026-01-23 19:11:37.487366214 +0000 UTC m=+0.150353773 container attach c422141e9fb67339b859ac4e1f1dda24eeaae01b74eb3c2254a03ac5f2bb414e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:11:37 compute-0 gracious_liskov[260182]: 167 167
Jan 23 19:11:37 compute-0 systemd[1]: libpod-c422141e9fb67339b859ac4e1f1dda24eeaae01b74eb3c2254a03ac5f2bb414e.scope: Deactivated successfully.
Jan 23 19:11:37 compute-0 podman[260165]: 2026-01-23 19:11:37.489324761 +0000 UTC m=+0.152312300 container died c422141e9fb67339b859ac4e1f1dda24eeaae01b74eb3c2254a03ac5f2bb414e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 23 19:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-223491d90aa365af191743919de49ede65c5515bf5b2ee60d07ba75b69636188-merged.mount: Deactivated successfully.
Jan 23 19:11:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Jan 23 19:11:37 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/793503385' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Jan 23 19:11:37 compute-0 podman[260165]: 2026-01-23 19:11:37.538964173 +0000 UTC m=+0.201951702 container remove c422141e9fb67339b859ac4e1f1dda24eeaae01b74eb3c2254a03ac5f2bb414e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:11:37 compute-0 systemd[1]: libpod-conmon-c422141e9fb67339b859ac4e1f1dda24eeaae01b74eb3c2254a03ac5f2bb414e.scope: Deactivated successfully.
Jan 23 19:11:37 compute-0 podman[260211]: 2026-01-23 19:11:37.715857194 +0000 UTC m=+0.049192632 container create 08748344c663d5ab3c2186f8f429a924ed3e9ebc00be3d7ff931d486c9d78cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_taussig, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 19:11:37 compute-0 systemd[1]: Started libpod-conmon-08748344c663d5ab3c2186f8f429a924ed3e9ebc00be3d7ff931d486c9d78cfa.scope.
Jan 23 19:11:37 compute-0 podman[260211]: 2026-01-23 19:11:37.695565318 +0000 UTC m=+0.028900786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:11:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c438221e7a30594989308808b822b16174a1b490d441ef35d42dff41cbf1c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c438221e7a30594989308808b822b16174a1b490d441ef35d42dff41cbf1c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c438221e7a30594989308808b822b16174a1b490d441ef35d42dff41cbf1c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c438221e7a30594989308808b822b16174a1b490d441ef35d42dff41cbf1c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:11:37 compute-0 podman[260211]: 2026-01-23 19:11:37.825172493 +0000 UTC m=+0.158507941 container init 08748344c663d5ab3c2186f8f429a924ed3e9ebc00be3d7ff931d486c9d78cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 23 19:11:37 compute-0 podman[260211]: 2026-01-23 19:11:37.83405794 +0000 UTC m=+0.167393378 container start 08748344c663d5ab3c2186f8f429a924ed3e9ebc00be3d7ff931d486c9d78cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_taussig, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:11:37 compute-0 podman[260211]: 2026-01-23 19:11:37.838024497 +0000 UTC m=+0.171359935 container attach 08748344c663d5ab3c2186f8f429a924ed3e9ebc00be3d7ff931d486c9d78cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_taussig, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 19:11:37 compute-0 ceph-mon[75097]: pgmap v1269: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:37 compute-0 ceph-mon[75097]: from='client.14718 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:37 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/793503385' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Jan 23 19:11:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Jan 23 19:11:38 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3899074657' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Jan 23 19:11:38 compute-0 lvm[260540]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:11:38 compute-0 lvm[260536]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:11:38 compute-0 lvm[260540]: VG ceph_vg1 finished
Jan 23 19:11:38 compute-0 lvm[260536]: VG ceph_vg0 finished
Jan 23 19:11:38 compute-0 lvm[260541]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:11:38 compute-0 lvm[260541]: VG ceph_vg2 finished
Jan 23 19:11:38 compute-0 fervent_taussig[260246]: {}
Jan 23 19:11:38 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14724 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:38 compute-0 systemd[1]: libpod-08748344c663d5ab3c2186f8f429a924ed3e9ebc00be3d7ff931d486c9d78cfa.scope: Deactivated successfully.
Jan 23 19:11:38 compute-0 systemd[1]: libpod-08748344c663d5ab3c2186f8f429a924ed3e9ebc00be3d7ff931d486c9d78cfa.scope: Consumed 1.286s CPU time, 37.0M memory peak, read 8.4M from disk, written 0B to disk.
Jan 23 19:11:38 compute-0 podman[260211]: 2026-01-23 19:11:38.619479221 +0000 UTC m=+0.952814659 container died 08748344c663d5ab3c2186f8f429a924ed3e9ebc00be3d7ff931d486c9d78cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Jan 23 19:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-64c438221e7a30594989308808b822b16174a1b490d441ef35d42dff41cbf1c7-merged.mount: Deactivated successfully.
Jan 23 19:11:38 compute-0 podman[260211]: 2026-01-23 19:11:38.659849307 +0000 UTC m=+0.993184745 container remove 08748344c663d5ab3c2186f8f429a924ed3e9ebc00be3d7ff931d486c9d78cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_taussig, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 19:11:38 compute-0 systemd[1]: libpod-conmon-08748344c663d5ab3c2186f8f429a924ed3e9ebc00be3d7ff931d486c9d78cfa.scope: Deactivated successfully.
Jan 23 19:11:38 compute-0 sudo[260060]: pam_unix(sudo:session): session closed for user root
Jan 23 19:11:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:11:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:11:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:11:38 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:11:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:38 compute-0 sudo[260610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:11:38 compute-0 sudo[260610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:11:38 compute-0 sudo[260610]: pam_unix(sudo:session): session closed for user root
Jan 23 19:11:38 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3899074657' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Jan 23 19:11:38 compute-0 ceph-mon[75097]: from='client.14724 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:11:38 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:11:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Jan 23 19:11:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1741181524' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Jan 23 19:11:39 compute-0 virtqemud[240019]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 23 19:11:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:39 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14728 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:39 compute-0 ceph-mon[75097]: pgmap v1270: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:39 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1741181524' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Jan 23 19:11:39 compute-0 ceph-mon[75097]: from='client.14728 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14730 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:40 compute-0 systemd[1]: Starting Time & Date Service...
Jan 23 19:11:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Jan 23 19:11:40 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3448128235' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Jan 23 19:11:40 compute-0 systemd[1]: Started Time & Date Service.
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:11:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:40 compute-0 ceph-mon[75097]: from='client.14730 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:40 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3448128235' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Jan 23 19:11:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Jan 23 19:11:41 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822484600' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14736 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14738 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:11:41 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:11:41 compute-0 ceph-mon[75097]: pgmap v1271: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:41 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/822484600' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Jan 23 19:11:41 compute-0 ceph-mon[75097]: from='client.14736 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Jan 23 19:11:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3601127652' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 23 19:11:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Jan 23 19:11:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553791143' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Jan 23 19:11:43 compute-0 ceph-mon[75097]: from='client.14738 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:43 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3601127652' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 23 19:11:43 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2553791143' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Jan 23 19:11:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14744 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14746 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:44 compute-0 ceph-mon[75097]: pgmap v1272: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:44 compute-0 ceph-mon[75097]: from='client.14744 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 23 19:11:44 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589966194' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 23 19:11:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Jan 23 19:11:44 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550109570' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Jan 23 19:11:45 compute-0 ceph-mon[75097]: from='client.14746 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:11:45 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1589966194' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 23 19:11:45 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2550109570' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Jan 23 19:11:46 compute-0 ceph-mon[75097]: pgmap v1273: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:48 compute-0 podman[261118]: 2026-01-23 19:11:48.043625167 +0000 UTC m=+0.109950806 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 23 19:11:48 compute-0 ceph-mon[75097]: pgmap v1274: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:50 compute-0 ceph-mon[75097]: pgmap v1275: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:52 compute-0 ceph-mon[75097]: pgmap v1276: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:54 compute-0 ceph-mon[75097]: pgmap v1277: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:11:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:54 compute-0 podman[261144]: 2026-01-23 19:11:54.978577135 +0000 UTC m=+0.052149404 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.327110) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195515327161, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2496, "num_deletes": 512, "total_data_size": 3520858, "memory_usage": 3595904, "flush_reason": "Manual Compaction"}
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195515352907, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3398426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26339, "largest_seqno": 28834, "table_properties": {"data_size": 3387674, "index_size": 6286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 28381, "raw_average_key_size": 20, "raw_value_size": 3363072, "raw_average_value_size": 2483, "num_data_blocks": 276, "num_entries": 1354, "num_filter_entries": 1354, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769195318, "oldest_key_time": 1769195318, "file_creation_time": 1769195515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 25841 microseconds, and 7432 cpu microseconds.
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.352952) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3398426 bytes OK
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.352971) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.354632) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.354674) EVENT_LOG_v1 {"time_micros": 1769195515354666, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.354696) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3509030, prev total WAL file size 3509030, number of live WAL files 2.
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.355751) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3318KB)], [59(10MB)]
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195515355820, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13900262, "oldest_snapshot_seqno": -1}
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6036 keys, 9047943 bytes, temperature: kUnknown
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195515420505, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9047943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9007329, "index_size": 24440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 151148, "raw_average_key_size": 25, "raw_value_size": 8898860, "raw_average_value_size": 1474, "num_data_blocks": 1001, "num_entries": 6036, "num_filter_entries": 6036, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769195515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.420756) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9047943 bytes
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.424014) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.5 rd, 139.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.8) write-amplify(2.7) OK, records in: 7069, records dropped: 1033 output_compression: NoCompression
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.424048) EVENT_LOG_v1 {"time_micros": 1769195515424026, "job": 32, "event": "compaction_finished", "compaction_time_micros": 64790, "compaction_time_cpu_micros": 22080, "output_level": 6, "num_output_files": 1, "total_output_size": 9047943, "num_input_records": 7069, "num_output_records": 6036, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195515424764, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195515426494, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.355624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.426601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.426607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.426608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.426610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:11:55 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:11:55.426611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:11:56 compute-0 ceph-mon[75097]: pgmap v1278: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:11:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/754801668' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:11:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:11:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/754801668' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:11:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/754801668' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:11:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/754801668' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:11:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:59 compute-0 ceph-mon[75097]: pgmap v1279: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:11:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:12:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:12:00 compute-0 ceph-mon[75097]: pgmap v1280: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:12:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:12:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:12:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:12:01 compute-0 nova_compute[240119]: 2026-01-23 19:12:01.344 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:12:02 compute-0 ceph-mon[75097]: pgmap v1281: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:04 compute-0 ceph-mon[75097]: pgmap v1282: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:06 compute-0 ceph-mon[75097]: pgmap v1283: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:07 compute-0 nova_compute[240119]: 2026-01-23 19:12:07.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:12:07 compute-0 nova_compute[240119]: 2026-01-23 19:12:07.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:12:08 compute-0 nova_compute[240119]: 2026-01-23 19:12:08.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:12:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:08 compute-0 ceph-mon[75097]: pgmap v1284: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:09 compute-0 nova_compute[240119]: 2026-01-23 19:12:09.347 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:12:09 compute-0 nova_compute[240119]: 2026-01-23 19:12:09.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:12:09 compute-0 nova_compute[240119]: 2026-01-23 19:12:09.375 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:12:09 compute-0 nova_compute[240119]: 2026-01-23 19:12:09.375 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:12:09 compute-0 nova_compute[240119]: 2026-01-23 19:12:09.375 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:12:09 compute-0 nova_compute[240119]: 2026-01-23 19:12:09.376 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:12:09 compute-0 nova_compute[240119]: 2026-01-23 19:12:09.376 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:12:09 compute-0 ceph-mon[75097]: pgmap v1285: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:12:09 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97474055' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:12:09 compute-0 nova_compute[240119]: 2026-01-23 19:12:09.951 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:12:10 compute-0 nova_compute[240119]: 2026-01-23 19:12:10.108 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:12:10 compute-0 nova_compute[240119]: 2026-01-23 19:12:10.109 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4819MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:12:10 compute-0 nova_compute[240119]: 2026-01-23 19:12:10.110 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:12:10 compute-0 nova_compute[240119]: 2026-01-23 19:12:10.110 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:12:10 compute-0 nova_compute[240119]: 2026-01-23 19:12:10.172 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:12:10 compute-0 nova_compute[240119]: 2026-01-23 19:12:10.172 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:12:10 compute-0 nova_compute[240119]: 2026-01-23 19:12:10.186 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:12:10 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 19:12:10 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 19:12:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:12:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1066593410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:12:10 compute-0 nova_compute[240119]: 2026-01-23 19:12:10.737 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:12:10 compute-0 nova_compute[240119]: 2026-01-23 19:12:10.742 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:12:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:10 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/97474055' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:12:10 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1066593410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:12:11 compute-0 nova_compute[240119]: 2026-01-23 19:12:11.005 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:12:11 compute-0 nova_compute[240119]: 2026-01-23 19:12:11.007 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:12:11 compute-0 nova_compute[240119]: 2026-01-23 19:12:11.008 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:12:11 compute-0 ceph-mon[75097]: pgmap v1286: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:12 compute-0 sudo[253828]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:12 compute-0 sshd-session[253827]: Received disconnect from 192.168.122.10 port 44924:11: disconnected by user
Jan 23 19:12:12 compute-0 sshd-session[253827]: Disconnected from user zuul 192.168.122.10 port 44924
Jan 23 19:12:12 compute-0 sshd-session[253824]: pam_unix(sshd:session): session closed for user zuul
Jan 23 19:12:12 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 23 19:12:12 compute-0 systemd[1]: session-51.scope: Consumed 2min 46.961s CPU time, 694.6M memory peak, read 225.7M from disk, written 70.8M to disk.
Jan 23 19:12:12 compute-0 systemd-logind[792]: Session 51 logged out. Waiting for processes to exit.
Jan 23 19:12:12 compute-0 systemd-logind[792]: Removed session 51.
Jan 23 19:12:12 compute-0 sshd-session[261212]: Accepted publickey for zuul from 192.168.122.10 port 53352 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 19:12:12 compute-0 systemd-logind[792]: New session 52 of user zuul.
Jan 23 19:12:12 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 23 19:12:12 compute-0 sshd-session[261212]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 19:12:12 compute-0 sudo[261216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-01-23-tpmghkb.tar.xz
Jan 23 19:12:12 compute-0 sudo[261216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 19:12:12 compute-0 sudo[261216]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:12 compute-0 sshd-session[261215]: Received disconnect from 192.168.122.10 port 53352:11: disconnected by user
Jan 23 19:12:12 compute-0 sshd-session[261215]: Disconnected from user zuul 192.168.122.10 port 53352
Jan 23 19:12:12 compute-0 sshd-session[261212]: pam_unix(sshd:session): session closed for user zuul
Jan 23 19:12:12 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Jan 23 19:12:12 compute-0 systemd-logind[792]: Session 52 logged out. Waiting for processes to exit.
Jan 23 19:12:12 compute-0 systemd-logind[792]: Removed session 52.
Jan 23 19:12:12 compute-0 sshd-session[261241]: Accepted publickey for zuul from 192.168.122.10 port 53356 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 19:12:12 compute-0 systemd-logind[792]: New session 53 of user zuul.
Jan 23 19:12:12 compute-0 systemd[1]: Started Session 53 of User zuul.
Jan 23 19:12:12 compute-0 sshd-session[261241]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 19:12:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:12 compute-0 sudo[261245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Jan 23 19:12:12 compute-0 sudo[261245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 19:12:12 compute-0 sudo[261245]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:12 compute-0 sshd-session[261244]: Received disconnect from 192.168.122.10 port 53356:11: disconnected by user
Jan 23 19:12:12 compute-0 sshd-session[261244]: Disconnected from user zuul 192.168.122.10 port 53356
Jan 23 19:12:12 compute-0 sshd-session[261241]: pam_unix(sshd:session): session closed for user zuul
Jan 23 19:12:12 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Jan 23 19:12:12 compute-0 systemd-logind[792]: Session 53 logged out. Waiting for processes to exit.
Jan 23 19:12:12 compute-0 systemd-logind[792]: Removed session 53.
Jan 23 19:12:13 compute-0 nova_compute[240119]: 2026-01-23 19:12:13.008 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:12:13 compute-0 nova_compute[240119]: 2026-01-23 19:12:13.009 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:12:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:12:13.593 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:12:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:12:13.593 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:12:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:12:13.593 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:12:13 compute-0 ceph-mon[75097]: pgmap v1287: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:14 compute-0 nova_compute[240119]: 2026-01-23 19:12:14.344 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:12:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:14 compute-0 nova_compute[240119]: 2026-01-23 19:12:14.376 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:12:14 compute-0 nova_compute[240119]: 2026-01-23 19:12:14.376 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:12:14 compute-0 nova_compute[240119]: 2026-01-23 19:12:14.377 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:12:14 compute-0 nova_compute[240119]: 2026-01-23 19:12:14.409 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:12:14 compute-0 nova_compute[240119]: 2026-01-23 19:12:14.409 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:12:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:16 compute-0 ceph-mon[75097]: pgmap v1288: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:18 compute-0 ceph-mon[75097]: pgmap v1289: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:19 compute-0 podman[261270]: 2026-01-23 19:12:19.004435633 +0000 UTC m=+0.088752109 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 23 19:12:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:20 compute-0 ceph-mon[75097]: pgmap v1290: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:22 compute-0 ceph-mon[75097]: pgmap v1291: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:24 compute-0 ceph-mon[75097]: pgmap v1292: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:25 compute-0 podman[261295]: 2026-01-23 19:12:25.985350582 +0000 UTC m=+0.065337826 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 23 19:12:26 compute-0 ceph-mon[75097]: pgmap v1293: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:28 compute-0 ceph-mon[75097]: pgmap v1294: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:12:28
Jan 23 19:12:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:12:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:12:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'vms']
Jan 23 19:12:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:12:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:12:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:12:30 compute-0 ceph-mon[75097]: pgmap v1295: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:12:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:12:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:12:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:12:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:12:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:12:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:12:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:12:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:12:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:12:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:12:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:12:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:12:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:12:32 compute-0 ceph-mon[75097]: pgmap v1296: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:34 compute-0 ceph-mon[75097]: pgmap v1297: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:36 compute-0 ceph-mon[75097]: pgmap v1298: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:38 compute-0 ceph-mon[75097]: pgmap v1299: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:38 compute-0 sudo[261316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:12:38 compute-0 sudo[261316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:12:38 compute-0 sudo[261316]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:38 compute-0 sudo[261341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:12:38 compute-0 sudo[261341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:12:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:39 compute-0 sudo[261341]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:12:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:12:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:12:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:12:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:12:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:12:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:12:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:12:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:12:39 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:12:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:12:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:12:39 compute-0 sudo[261397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:12:39 compute-0 sudo[261397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:12:39 compute-0 sudo[261397]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:39 compute-0 sudo[261422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:12:39 compute-0 sudo[261422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:12:39 compute-0 podman[261458]: 2026-01-23 19:12:39.934395595 +0000 UTC m=+0.048014584 container create aa77d5186eedd2f7560e5387804ed9e72346d1ae603e3b3f2061d25ebe8803f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bartik, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 23 19:12:39 compute-0 systemd[1]: Started libpod-conmon-aa77d5186eedd2f7560e5387804ed9e72346d1ae603e3b3f2061d25ebe8803f1.scope.
Jan 23 19:12:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:12:40 compute-0 podman[261458]: 2026-01-23 19:12:39.909463016 +0000 UTC m=+0.023082025 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:12:40 compute-0 podman[261458]: 2026-01-23 19:12:40.045523528 +0000 UTC m=+0.159142557 container init aa77d5186eedd2f7560e5387804ed9e72346d1ae603e3b3f2061d25ebe8803f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 19:12:40 compute-0 podman[261458]: 2026-01-23 19:12:40.052639103 +0000 UTC m=+0.166258112 container start aa77d5186eedd2f7560e5387804ed9e72346d1ae603e3b3f2061d25ebe8803f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 23 19:12:40 compute-0 infallible_bartik[261475]: 167 167
Jan 23 19:12:40 compute-0 systemd[1]: libpod-aa77d5186eedd2f7560e5387804ed9e72346d1ae603e3b3f2061d25ebe8803f1.scope: Deactivated successfully.
Jan 23 19:12:40 compute-0 podman[261458]: 2026-01-23 19:12:40.067362362 +0000 UTC m=+0.180981381 container attach aa77d5186eedd2f7560e5387804ed9e72346d1ae603e3b3f2061d25ebe8803f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 23 19:12:40 compute-0 podman[261458]: 2026-01-23 19:12:40.067737531 +0000 UTC m=+0.181356520 container died aa77d5186eedd2f7560e5387804ed9e72346d1ae603e3b3f2061d25ebe8803f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bartik, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e4fd0202c5f91f144dfa918370dafe39671d8cac9964e85d74553fe9bdce020-merged.mount: Deactivated successfully.
Jan 23 19:12:40 compute-0 podman[261458]: 2026-01-23 19:12:40.119685019 +0000 UTC m=+0.233304008 container remove aa77d5186eedd2f7560e5387804ed9e72346d1ae603e3b3f2061d25ebe8803f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bartik, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:12:40 compute-0 systemd[1]: libpod-conmon-aa77d5186eedd2f7560e5387804ed9e72346d1ae603e3b3f2061d25ebe8803f1.scope: Deactivated successfully.
Jan 23 19:12:40 compute-0 podman[261502]: 2026-01-23 19:12:40.285538049 +0000 UTC m=+0.045673516 container create d0939969dc52cbf166596a742fdf770f2f66b11d6a848b34bc9917ab77537546 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 19:12:40 compute-0 systemd[1]: Started libpod-conmon-d0939969dc52cbf166596a742fdf770f2f66b11d6a848b34bc9917ab77537546.scope.
Jan 23 19:12:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263a69d44e4b2f686b30ee2894d9a47c5f04426a969996d5219527d07087a9c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263a69d44e4b2f686b30ee2894d9a47c5f04426a969996d5219527d07087a9c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263a69d44e4b2f686b30ee2894d9a47c5f04426a969996d5219527d07087a9c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263a69d44e4b2f686b30ee2894d9a47c5f04426a969996d5219527d07087a9c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263a69d44e4b2f686b30ee2894d9a47c5f04426a969996d5219527d07087a9c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:40 compute-0 podman[261502]: 2026-01-23 19:12:40.261626426 +0000 UTC m=+0.021761913 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:12:40 compute-0 ceph-mon[75097]: pgmap v1300: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:12:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:12:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:12:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:12:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:12:40 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:12:40 compute-0 podman[261502]: 2026-01-23 19:12:40.365995364 +0000 UTC m=+0.126130881 container init d0939969dc52cbf166596a742fdf770f2f66b11d6a848b34bc9917ab77537546 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 19:12:40 compute-0 podman[261502]: 2026-01-23 19:12:40.375319442 +0000 UTC m=+0.135454899 container start d0939969dc52cbf166596a742fdf770f2f66b11d6a848b34bc9917ab77537546 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_keldysh, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 23 19:12:40 compute-0 podman[261502]: 2026-01-23 19:12:40.379713299 +0000 UTC m=+0.139848776 container attach d0939969dc52cbf166596a742fdf770f2f66b11d6a848b34bc9917ab77537546 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_keldysh, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:12:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:40 compute-0 gracious_keldysh[261518]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:12:40 compute-0 gracious_keldysh[261518]: --> All data devices are unavailable
Jan 23 19:12:40 compute-0 systemd[1]: libpod-d0939969dc52cbf166596a742fdf770f2f66b11d6a848b34bc9917ab77537546.scope: Deactivated successfully.
Jan 23 19:12:40 compute-0 podman[261502]: 2026-01-23 19:12:40.869846998 +0000 UTC m=+0.629982485 container died d0939969dc52cbf166596a742fdf770f2f66b11d6a848b34bc9917ab77537546 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 19:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-263a69d44e4b2f686b30ee2894d9a47c5f04426a969996d5219527d07087a9c2-merged.mount: Deactivated successfully.
Jan 23 19:12:40 compute-0 podman[261502]: 2026-01-23 19:12:40.9125215 +0000 UTC m=+0.672656967 container remove d0939969dc52cbf166596a742fdf770f2f66b11d6a848b34bc9917ab77537546 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 19:12:40 compute-0 systemd[1]: libpod-conmon-d0939969dc52cbf166596a742fdf770f2f66b11d6a848b34bc9917ab77537546.scope: Deactivated successfully.
Jan 23 19:12:40 compute-0 sudo[261422]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:41 compute-0 sudo[261550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:12:41 compute-0 sudo[261550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:12:41 compute-0 sudo[261550]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:41 compute-0 sudo[261575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:12:41 compute-0 sudo[261575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:12:41 compute-0 podman[261612]: 2026-01-23 19:12:41.363784729 +0000 UTC m=+0.044613400 container create a578a578dc8e1a5d2c4b9030df6097bd0ef45e404117187a3c78bd1444d8fbef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:12:41 compute-0 systemd[1]: Started libpod-conmon-a578a578dc8e1a5d2c4b9030df6097bd0ef45e404117187a3c78bd1444d8fbef.scope.
Jan 23 19:12:41 compute-0 podman[261612]: 2026-01-23 19:12:41.343949495 +0000 UTC m=+0.024778146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:12:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:12:41 compute-0 podman[261612]: 2026-01-23 19:12:41.459858646 +0000 UTC m=+0.140687367 container init a578a578dc8e1a5d2c4b9030df6097bd0ef45e404117187a3c78bd1444d8fbef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 19:12:41 compute-0 podman[261612]: 2026-01-23 19:12:41.467981543 +0000 UTC m=+0.148810224 container start a578a578dc8e1a5d2c4b9030df6097bd0ef45e404117187a3c78bd1444d8fbef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_perlman, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 23 19:12:41 compute-0 podman[261612]: 2026-01-23 19:12:41.472858753 +0000 UTC m=+0.153687434 container attach a578a578dc8e1a5d2c4b9030df6097bd0ef45e404117187a3c78bd1444d8fbef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_perlman, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 19:12:41 compute-0 stoic_perlman[261628]: 167 167
Jan 23 19:12:41 compute-0 systemd[1]: libpod-a578a578dc8e1a5d2c4b9030df6097bd0ef45e404117187a3c78bd1444d8fbef.scope: Deactivated successfully.
Jan 23 19:12:41 compute-0 conmon[261628]: conmon a578a578dc8e1a5d2c4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a578a578dc8e1a5d2c4b9030df6097bd0ef45e404117187a3c78bd1444d8fbef.scope/container/memory.events
Jan 23 19:12:41 compute-0 podman[261612]: 2026-01-23 19:12:41.476619735 +0000 UTC m=+0.157448456 container died a578a578dc8e1a5d2c4b9030df6097bd0ef45e404117187a3c78bd1444d8fbef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_perlman, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e72d561f0c3eea9d5101261cade4cd9bc019b8100fd82be70221b4296e3f37c-merged.mount: Deactivated successfully.
Jan 23 19:12:41 compute-0 podman[261612]: 2026-01-23 19:12:41.516041678 +0000 UTC m=+0.196870349 container remove a578a578dc8e1a5d2c4b9030df6097bd0ef45e404117187a3c78bd1444d8fbef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:12:41 compute-0 systemd[1]: libpod-conmon-a578a578dc8e1a5d2c4b9030df6097bd0ef45e404117187a3c78bd1444d8fbef.scope: Deactivated successfully.
Jan 23 19:12:41 compute-0 podman[261653]: 2026-01-23 19:12:41.706825916 +0000 UTC m=+0.042716364 container create 92b0a6a8fe6a08e13934109ef9e69c0499f738b20f7d5b66f89c343a52880910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 23 19:12:41 compute-0 systemd[1]: Started libpod-conmon-92b0a6a8fe6a08e13934109ef9e69c0499f738b20f7d5b66f89c343a52880910.scope.
Jan 23 19:12:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3778db9a7d706830b0fb973f8885f0f32fedc78648dc5b41ff6fbda9e0f822e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3778db9a7d706830b0fb973f8885f0f32fedc78648dc5b41ff6fbda9e0f822e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3778db9a7d706830b0fb973f8885f0f32fedc78648dc5b41ff6fbda9e0f822e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3778db9a7d706830b0fb973f8885f0f32fedc78648dc5b41ff6fbda9e0f822e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:41 compute-0 podman[261653]: 2026-01-23 19:12:41.689065692 +0000 UTC m=+0.024956170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:12:41 compute-0 podman[261653]: 2026-01-23 19:12:41.785938188 +0000 UTC m=+0.121828636 container init 92b0a6a8fe6a08e13934109ef9e69c0499f738b20f7d5b66f89c343a52880910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_brown, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:12:41 compute-0 podman[261653]: 2026-01-23 19:12:41.793574465 +0000 UTC m=+0.129464923 container start 92b0a6a8fe6a08e13934109ef9e69c0499f738b20f7d5b66f89c343a52880910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_brown, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 19:12:41 compute-0 podman[261653]: 2026-01-23 19:12:41.797192173 +0000 UTC m=+0.133082641 container attach 92b0a6a8fe6a08e13934109ef9e69c0499f738b20f7d5b66f89c343a52880910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:12:42 compute-0 magical_brown[261669]: {
Jan 23 19:12:42 compute-0 magical_brown[261669]:     "0": [
Jan 23 19:12:42 compute-0 magical_brown[261669]:         {
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "devices": [
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "/dev/loop3"
Jan 23 19:12:42 compute-0 magical_brown[261669]:             ],
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_name": "ceph_lv0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_size": "21470642176",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "name": "ceph_lv0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "tags": {
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.cluster_name": "ceph",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.crush_device_class": "",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.encrypted": "0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.objectstore": "bluestore",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.osd_id": "0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.type": "block",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.vdo": "0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.with_tpm": "0"
Jan 23 19:12:42 compute-0 magical_brown[261669]:             },
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "type": "block",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "vg_name": "ceph_vg0"
Jan 23 19:12:42 compute-0 magical_brown[261669]:         }
Jan 23 19:12:42 compute-0 magical_brown[261669]:     ],
Jan 23 19:12:42 compute-0 magical_brown[261669]:     "1": [
Jan 23 19:12:42 compute-0 magical_brown[261669]:         {
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "devices": [
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "/dev/loop4"
Jan 23 19:12:42 compute-0 magical_brown[261669]:             ],
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_name": "ceph_lv1",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_size": "21470642176",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "name": "ceph_lv1",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "tags": {
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.cluster_name": "ceph",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.crush_device_class": "",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.encrypted": "0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.objectstore": "bluestore",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.osd_id": "1",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.type": "block",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.vdo": "0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.with_tpm": "0"
Jan 23 19:12:42 compute-0 magical_brown[261669]:             },
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "type": "block",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "vg_name": "ceph_vg1"
Jan 23 19:12:42 compute-0 magical_brown[261669]:         }
Jan 23 19:12:42 compute-0 magical_brown[261669]:     ],
Jan 23 19:12:42 compute-0 magical_brown[261669]:     "2": [
Jan 23 19:12:42 compute-0 magical_brown[261669]:         {
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "devices": [
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "/dev/loop5"
Jan 23 19:12:42 compute-0 magical_brown[261669]:             ],
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_name": "ceph_lv2",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_size": "21470642176",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "name": "ceph_lv2",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "tags": {
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.cluster_name": "ceph",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.crush_device_class": "",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.encrypted": "0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.objectstore": "bluestore",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.osd_id": "2",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.type": "block",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.vdo": "0",
Jan 23 19:12:42 compute-0 magical_brown[261669]:                 "ceph.with_tpm": "0"
Jan 23 19:12:42 compute-0 magical_brown[261669]:             },
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "type": "block",
Jan 23 19:12:42 compute-0 magical_brown[261669]:             "vg_name": "ceph_vg2"
Jan 23 19:12:42 compute-0 magical_brown[261669]:         }
Jan 23 19:12:42 compute-0 magical_brown[261669]:     ]
Jan 23 19:12:42 compute-0 magical_brown[261669]: }
Jan 23 19:12:42 compute-0 systemd[1]: libpod-92b0a6a8fe6a08e13934109ef9e69c0499f738b20f7d5b66f89c343a52880910.scope: Deactivated successfully.
Jan 23 19:12:42 compute-0 podman[261653]: 2026-01-23 19:12:42.149987839 +0000 UTC m=+0.485878287 container died 92b0a6a8fe6a08e13934109ef9e69c0499f738b20f7d5b66f89c343a52880910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_brown, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Jan 23 19:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3778db9a7d706830b0fb973f8885f0f32fedc78648dc5b41ff6fbda9e0f822e-merged.mount: Deactivated successfully.
Jan 23 19:12:42 compute-0 podman[261653]: 2026-01-23 19:12:42.191047161 +0000 UTC m=+0.526937609 container remove 92b0a6a8fe6a08e13934109ef9e69c0499f738b20f7d5b66f89c343a52880910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_brown, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 19:12:42 compute-0 systemd[1]: libpod-conmon-92b0a6a8fe6a08e13934109ef9e69c0499f738b20f7d5b66f89c343a52880910.scope: Deactivated successfully.
Jan 23 19:12:42 compute-0 sudo[261575]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:42 compute-0 sudo[261689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:12:42 compute-0 sudo[261689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:12:42 compute-0 sudo[261689]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:42 compute-0 sudo[261714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:12:42 compute-0 sudo[261714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:12:42 compute-0 ceph-mon[75097]: pgmap v1301: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:42 compute-0 podman[261751]: 2026-01-23 19:12:42.668638243 +0000 UTC m=+0.041353541 container create 1969eca6c500964529b55f3cc4806643373d31633c0ae4c8b86c0e3a92af1da4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_cori, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:12:42 compute-0 systemd[1]: Started libpod-conmon-1969eca6c500964529b55f3cc4806643373d31633c0ae4c8b86c0e3a92af1da4.scope.
Jan 23 19:12:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:12:42 compute-0 podman[261751]: 2026-01-23 19:12:42.739075093 +0000 UTC m=+0.111790401 container init 1969eca6c500964529b55f3cc4806643373d31633c0ae4c8b86c0e3a92af1da4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 19:12:42 compute-0 podman[261751]: 2026-01-23 19:12:42.744767622 +0000 UTC m=+0.117482920 container start 1969eca6c500964529b55f3cc4806643373d31633c0ae4c8b86c0e3a92af1da4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_cori, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 19:12:42 compute-0 podman[261751]: 2026-01-23 19:12:42.651093754 +0000 UTC m=+0.023809072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:12:42 compute-0 podman[261751]: 2026-01-23 19:12:42.748300789 +0000 UTC m=+0.121016087 container attach 1969eca6c500964529b55f3cc4806643373d31633c0ae4c8b86c0e3a92af1da4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_cori, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 23 19:12:42 compute-0 laughing_cori[261768]: 167 167
Jan 23 19:12:42 compute-0 systemd[1]: libpod-1969eca6c500964529b55f3cc4806643373d31633c0ae4c8b86c0e3a92af1da4.scope: Deactivated successfully.
Jan 23 19:12:42 compute-0 podman[261751]: 2026-01-23 19:12:42.750229785 +0000 UTC m=+0.122945083 container died 1969eca6c500964529b55f3cc4806643373d31633c0ae4c8b86c0e3a92af1da4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_cori, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 19:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-344aa8ae0d727484fbf31043d701d43ab7bd4447bf0b4689581eca387b907fbe-merged.mount: Deactivated successfully.
Jan 23 19:12:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:42 compute-0 podman[261751]: 2026-01-23 19:12:42.785187719 +0000 UTC m=+0.157903027 container remove 1969eca6c500964529b55f3cc4806643373d31633c0ae4c8b86c0e3a92af1da4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_cori, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 23 19:12:42 compute-0 systemd[1]: libpod-conmon-1969eca6c500964529b55f3cc4806643373d31633c0ae4c8b86c0e3a92af1da4.scope: Deactivated successfully.
Jan 23 19:12:42 compute-0 podman[261791]: 2026-01-23 19:12:42.970149196 +0000 UTC m=+0.071662221 container create 1fe87cb07b9b7eaa8b961789940762a9b64585ff1697fc96d8a83a5e319f1a21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swanson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 19:12:43 compute-0 systemd[1]: Started libpod-conmon-1fe87cb07b9b7eaa8b961789940762a9b64585ff1697fc96d8a83a5e319f1a21.scope.
Jan 23 19:12:43 compute-0 podman[261791]: 2026-01-23 19:12:42.919228103 +0000 UTC m=+0.020741138 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:12:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f83876041daa1ee6b0e8a6f91b9f4251c59c7db8135f71841fba08255f75ec12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f83876041daa1ee6b0e8a6f91b9f4251c59c7db8135f71841fba08255f75ec12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f83876041daa1ee6b0e8a6f91b9f4251c59c7db8135f71841fba08255f75ec12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f83876041daa1ee6b0e8a6f91b9f4251c59c7db8135f71841fba08255f75ec12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:12:43 compute-0 podman[261791]: 2026-01-23 19:12:43.056424753 +0000 UTC m=+0.157937788 container init 1fe87cb07b9b7eaa8b961789940762a9b64585ff1697fc96d8a83a5e319f1a21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swanson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 23 19:12:43 compute-0 podman[261791]: 2026-01-23 19:12:43.064668844 +0000 UTC m=+0.166181859 container start 1fe87cb07b9b7eaa8b961789940762a9b64585ff1697fc96d8a83a5e319f1a21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swanson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:12:43 compute-0 podman[261791]: 2026-01-23 19:12:43.067856112 +0000 UTC m=+0.169369127 container attach 1fe87cb07b9b7eaa8b961789940762a9b64585ff1697fc96d8a83a5e319f1a21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swanson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:12:43 compute-0 lvm[261885]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:12:43 compute-0 lvm[261886]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:12:43 compute-0 lvm[261885]: VG ceph_vg0 finished
Jan 23 19:12:43 compute-0 lvm[261886]: VG ceph_vg1 finished
Jan 23 19:12:43 compute-0 lvm[261888]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:12:43 compute-0 lvm[261888]: VG ceph_vg2 finished
Jan 23 19:12:43 compute-0 musing_swanson[261807]: {}
Jan 23 19:12:43 compute-0 systemd[1]: libpod-1fe87cb07b9b7eaa8b961789940762a9b64585ff1697fc96d8a83a5e319f1a21.scope: Deactivated successfully.
Jan 23 19:12:43 compute-0 systemd[1]: libpod-1fe87cb07b9b7eaa8b961789940762a9b64585ff1697fc96d8a83a5e319f1a21.scope: Consumed 1.219s CPU time.
Jan 23 19:12:43 compute-0 podman[261791]: 2026-01-23 19:12:43.825195576 +0000 UTC m=+0.926708611 container died 1fe87cb07b9b7eaa8b961789940762a9b64585ff1697fc96d8a83a5e319f1a21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swanson, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f83876041daa1ee6b0e8a6f91b9f4251c59c7db8135f71841fba08255f75ec12-merged.mount: Deactivated successfully.
Jan 23 19:12:43 compute-0 podman[261791]: 2026-01-23 19:12:43.867043558 +0000 UTC m=+0.968556573 container remove 1fe87cb07b9b7eaa8b961789940762a9b64585ff1697fc96d8a83a5e319f1a21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:12:43 compute-0 systemd[1]: libpod-conmon-1fe87cb07b9b7eaa8b961789940762a9b64585ff1697fc96d8a83a5e319f1a21.scope: Deactivated successfully.
Jan 23 19:12:43 compute-0 sudo[261714]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:12:43 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:12:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:12:43 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:12:44 compute-0 sudo[261903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:12:44 compute-0 sudo[261903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:12:44 compute-0 sudo[261903]: pam_unix(sudo:session): session closed for user root
Jan 23 19:12:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:44 compute-0 ceph-mon[75097]: pgmap v1302: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:44 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:12:44 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:12:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:46 compute-0 ceph-mon[75097]: pgmap v1303: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:48 compute-0 ceph-mon[75097]: pgmap v1304: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:50 compute-0 podman[261928]: 2026-01-23 19:12:50.005368913 +0000 UTC m=+0.088396040 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 19:12:50 compute-0 ceph-mon[75097]: pgmap v1305: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:52 compute-0 ceph-mon[75097]: pgmap v1306: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:12:54 compute-0 ceph-mon[75097]: pgmap v1307: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:55 compute-0 ceph-mon[75097]: pgmap v1308: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:12:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3062996430' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:12:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:12:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3062996430' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:12:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3062996430' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:12:56 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3062996430' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:12:56 compute-0 podman[261954]: 2026-01-23 19:12:56.983418613 +0000 UTC m=+0.055692831 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 19:12:58 compute-0 ceph-mon[75097]: pgmap v1309: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:12:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:13:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:13:00 compute-0 ceph-mon[75097]: pgmap v1310: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:13:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:13:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:13:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:13:02 compute-0 ceph-mon[75097]: pgmap v1311: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:03 compute-0 nova_compute[240119]: 2026-01-23 19:13:03.408 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:13:04 compute-0 ceph-mon[75097]: pgmap v1312: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:07 compute-0 ceph-mon[75097]: pgmap v1313: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:08 compute-0 nova_compute[240119]: 2026-01-23 19:13:08.347 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:13:08 compute-0 nova_compute[240119]: 2026-01-23 19:13:08.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:13:08 compute-0 ceph-mon[75097]: pgmap v1314: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:09 compute-0 nova_compute[240119]: 2026-01-23 19:13:09.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:13:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:10 compute-0 ceph-mon[75097]: pgmap v1315: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:10 compute-0 nova_compute[240119]: 2026-01-23 19:13:10.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:13:10 compute-0 nova_compute[240119]: 2026-01-23 19:13:10.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:13:10 compute-0 nova_compute[240119]: 2026-01-23 19:13:10.376 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:13:10 compute-0 nova_compute[240119]: 2026-01-23 19:13:10.377 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:13:10 compute-0 nova_compute[240119]: 2026-01-23 19:13:10.377 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:13:10 compute-0 nova_compute[240119]: 2026-01-23 19:13:10.378 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:13:10 compute-0 nova_compute[240119]: 2026-01-23 19:13:10.378 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:13:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:13:10 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3683448963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:13:10 compute-0 nova_compute[240119]: 2026-01-23 19:13:10.924 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.136 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.137 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4932MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.137 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.137 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.201 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.202 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.225 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:13:11 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3683448963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:13:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:13:11 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/383406059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.752 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.757 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.774 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.776 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:13:11 compute-0 nova_compute[240119]: 2026-01-23 19:13:11.777 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:13:12 compute-0 ceph-mon[75097]: pgmap v1316: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:12 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/383406059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:13:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:13:13.594 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:13:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:13:13.595 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:13:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:13:13.595 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:13:13 compute-0 nova_compute[240119]: 2026-01-23 19:13:13.778 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:13:13 compute-0 nova_compute[240119]: 2026-01-23 19:13:13.779 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:13:13 compute-0 ceph-mon[75097]: pgmap v1317: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:15 compute-0 nova_compute[240119]: 2026-01-23 19:13:15.350 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:13:15 compute-0 nova_compute[240119]: 2026-01-23 19:13:15.350 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:13:15 compute-0 nova_compute[240119]: 2026-01-23 19:13:15.350 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:13:15 compute-0 nova_compute[240119]: 2026-01-23 19:13:15.376 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:13:15 compute-0 ceph-mon[75097]: pgmap v1318: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:16 compute-0 nova_compute[240119]: 2026-01-23 19:13:16.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:13:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:17 compute-0 ceph-mon[75097]: pgmap v1319: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:19 compute-0 ceph-mon[75097]: pgmap v1320: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:20 compute-0 podman[262017]: 2026-01-23 19:13:20.992432267 +0000 UTC m=+0.080179609 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 19:13:21 compute-0 ceph-mon[75097]: pgmap v1321: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:24 compute-0 ceph-mon[75097]: pgmap v1322: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:26 compute-0 ceph-mon[75097]: pgmap v1323: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:27 compute-0 podman[262044]: 2026-01-23 19:13:27.965469787 +0000 UTC m=+0.050595438 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 23 19:13:28 compute-0 ceph-mon[75097]: pgmap v1324: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:13:28
Jan 23 19:13:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:13:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:13:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control']
Jan 23 19:13:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:13:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:13:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:13:30 compute-0 ceph-mon[75097]: pgmap v1325: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:13:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:13:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:13:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:13:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:13:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:13:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:13:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:13:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:13:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:13:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:13:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:13:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:13:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:13:32 compute-0 ceph-mon[75097]: pgmap v1326: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:34 compute-0 ceph-mon[75097]: pgmap v1327: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:36 compute-0 ceph-mon[75097]: pgmap v1328: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:38 compute-0 ceph-mon[75097]: pgmap v1329: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:13:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:40 compute-0 ceph-mon[75097]: pgmap v1330: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:42 compute-0 ceph-mon[75097]: pgmap v1331: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:44 compute-0 sudo[262063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:13:44 compute-0 sudo[262063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:13:44 compute-0 sudo[262063]: pam_unix(sudo:session): session closed for user root
Jan 23 19:13:44 compute-0 sudo[262088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:13:44 compute-0 sudo[262088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:13:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:44 compute-0 ceph-mon[75097]: pgmap v1332: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:44 compute-0 sudo[262088]: pam_unix(sudo:session): session closed for user root
Jan 23 19:13:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:13:44 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:13:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:13:44 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:13:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:13:44 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:13:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:13:44 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:13:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:13:44 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:13:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:13:44 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:13:45 compute-0 sudo[262144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:13:45 compute-0 sudo[262144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:13:45 compute-0 sudo[262144]: pam_unix(sudo:session): session closed for user root
Jan 23 19:13:45 compute-0 sudo[262169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:13:45 compute-0 sudo[262169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:13:45 compute-0 podman[262206]: 2026-01-23 19:13:45.37500031 +0000 UTC m=+0.021854065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:13:45 compute-0 podman[262206]: 2026-01-23 19:13:45.575542717 +0000 UTC m=+0.222396452 container create 53414d1fb93d7d204c4b881480228ee4856316c99b97fa3328d7cfa73e0228b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_buck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 23 19:13:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:13:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:13:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:13:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:13:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:13:45 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:13:45 compute-0 systemd[1]: Started libpod-conmon-53414d1fb93d7d204c4b881480228ee4856316c99b97fa3328d7cfa73e0228b8.scope.
Jan 23 19:13:45 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:13:46 compute-0 podman[262206]: 2026-01-23 19:13:46.163777291 +0000 UTC m=+0.810631056 container init 53414d1fb93d7d204c4b881480228ee4856316c99b97fa3328d7cfa73e0228b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 19:13:46 compute-0 podman[262206]: 2026-01-23 19:13:46.172523704 +0000 UTC m=+0.819377439 container start 53414d1fb93d7d204c4b881480228ee4856316c99b97fa3328d7cfa73e0228b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_buck, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 23 19:13:46 compute-0 magical_buck[262222]: 167 167
Jan 23 19:13:46 compute-0 systemd[1]: libpod-53414d1fb93d7d204c4b881480228ee4856316c99b97fa3328d7cfa73e0228b8.scope: Deactivated successfully.
Jan 23 19:13:46 compute-0 conmon[262222]: conmon 53414d1fb93d7d204c4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53414d1fb93d7d204c4b881480228ee4856316c99b97fa3328d7cfa73e0228b8.scope/container/memory.events
Jan 23 19:13:46 compute-0 podman[262206]: 2026-01-23 19:13:46.230918191 +0000 UTC m=+0.877771946 container attach 53414d1fb93d7d204c4b881480228ee4856316c99b97fa3328d7cfa73e0228b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_buck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:13:46 compute-0 podman[262206]: 2026-01-23 19:13:46.23210169 +0000 UTC m=+0.878955435 container died 53414d1fb93d7d204c4b881480228ee4856316c99b97fa3328d7cfa73e0228b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_buck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e2e363d70779ef9e2e8e804ab0d18be894755df4d6a90c3ec94fa975abd7c33-merged.mount: Deactivated successfully.
Jan 23 19:13:46 compute-0 ceph-mon[75097]: pgmap v1333: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:46 compute-0 podman[262206]: 2026-01-23 19:13:46.659381193 +0000 UTC m=+1.306234968 container remove 53414d1fb93d7d204c4b881480228ee4856316c99b97fa3328d7cfa73e0228b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:13:46 compute-0 systemd[1]: libpod-conmon-53414d1fb93d7d204c4b881480228ee4856316c99b97fa3328d7cfa73e0228b8.scope: Deactivated successfully.
Jan 23 19:13:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:46 compute-0 podman[262248]: 2026-01-23 19:13:46.910909476 +0000 UTC m=+0.107847354 container create 9c0f87bc867b9e401e10cd0101b68b9c69b1e41f74a4edbf248d2a061114e8cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 19:13:46 compute-0 podman[262248]: 2026-01-23 19:13:46.826907345 +0000 UTC m=+0.023845243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:13:46 compute-0 systemd[1]: Started libpod-conmon-9c0f87bc867b9e401e10cd0101b68b9c69b1e41f74a4edbf248d2a061114e8cd.scope.
Jan 23 19:13:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eab9755d44c31f09ad9a7debf7a8154e5bca21d71e3f10790abf50073d9c743/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eab9755d44c31f09ad9a7debf7a8154e5bca21d71e3f10790abf50073d9c743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eab9755d44c31f09ad9a7debf7a8154e5bca21d71e3f10790abf50073d9c743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eab9755d44c31f09ad9a7debf7a8154e5bca21d71e3f10790abf50073d9c743/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eab9755d44c31f09ad9a7debf7a8154e5bca21d71e3f10790abf50073d9c743/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:47 compute-0 podman[262248]: 2026-01-23 19:13:47.347700652 +0000 UTC m=+0.544638580 container init 9c0f87bc867b9e401e10cd0101b68b9c69b1e41f74a4edbf248d2a061114e8cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:13:47 compute-0 podman[262248]: 2026-01-23 19:13:47.360165127 +0000 UTC m=+0.557102995 container start 9c0f87bc867b9e401e10cd0101b68b9c69b1e41f74a4edbf248d2a061114e8cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 19:13:47 compute-0 podman[262248]: 2026-01-23 19:13:47.398599055 +0000 UTC m=+0.595536933 container attach 9c0f87bc867b9e401e10cd0101b68b9c69b1e41f74a4edbf248d2a061114e8cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 19:13:47 compute-0 magical_lehmann[262265]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:13:47 compute-0 magical_lehmann[262265]: --> All data devices are unavailable
Jan 23 19:13:47 compute-0 systemd[1]: libpod-9c0f87bc867b9e401e10cd0101b68b9c69b1e41f74a4edbf248d2a061114e8cd.scope: Deactivated successfully.
Jan 23 19:13:47 compute-0 podman[262248]: 2026-01-23 19:13:47.855379499 +0000 UTC m=+1.052317437 container died 9c0f87bc867b9e401e10cd0101b68b9c69b1e41f74a4edbf248d2a061114e8cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 23 19:13:47 compute-0 ceph-mon[75097]: pgmap v1334: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eab9755d44c31f09ad9a7debf7a8154e5bca21d71e3f10790abf50073d9c743-merged.mount: Deactivated successfully.
Jan 23 19:13:47 compute-0 podman[262248]: 2026-01-23 19:13:47.960629669 +0000 UTC m=+1.157567567 container remove 9c0f87bc867b9e401e10cd0101b68b9c69b1e41f74a4edbf248d2a061114e8cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:13:47 compute-0 systemd[1]: libpod-conmon-9c0f87bc867b9e401e10cd0101b68b9c69b1e41f74a4edbf248d2a061114e8cd.scope: Deactivated successfully.
Jan 23 19:13:48 compute-0 sudo[262169]: pam_unix(sudo:session): session closed for user root
Jan 23 19:13:48 compute-0 sudo[262299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:13:48 compute-0 sudo[262299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:13:48 compute-0 sudo[262299]: pam_unix(sudo:session): session closed for user root
Jan 23 19:13:48 compute-0 sudo[262324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:13:48 compute-0 sudo[262324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:13:48 compute-0 podman[262359]: 2026-01-23 19:13:48.457693367 +0000 UTC m=+0.048529585 container create bb4fd596a1cf6da1631e7e7dd8f0b59c55ec483ca15476e62341e290c3f6f427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sutherland, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 19:13:48 compute-0 systemd[1]: Started libpod-conmon-bb4fd596a1cf6da1631e7e7dd8f0b59c55ec483ca15476e62341e290c3f6f427.scope.
Jan 23 19:13:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:13:48 compute-0 podman[262359]: 2026-01-23 19:13:48.437388822 +0000 UTC m=+0.028225080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:13:48 compute-0 podman[262359]: 2026-01-23 19:13:48.54377325 +0000 UTC m=+0.134609578 container init bb4fd596a1cf6da1631e7e7dd8f0b59c55ec483ca15476e62341e290c3f6f427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sutherland, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 19:13:48 compute-0 podman[262359]: 2026-01-23 19:13:48.550815542 +0000 UTC m=+0.141651760 container start bb4fd596a1cf6da1631e7e7dd8f0b59c55ec483ca15476e62341e290c3f6f427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:13:48 compute-0 podman[262359]: 2026-01-23 19:13:48.554972993 +0000 UTC m=+0.145809241 container attach bb4fd596a1cf6da1631e7e7dd8f0b59c55ec483ca15476e62341e290c3f6f427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:13:48 compute-0 tender_sutherland[262375]: 167 167
Jan 23 19:13:48 compute-0 systemd[1]: libpod-bb4fd596a1cf6da1631e7e7dd8f0b59c55ec483ca15476e62341e290c3f6f427.scope: Deactivated successfully.
Jan 23 19:13:48 compute-0 conmon[262375]: conmon bb4fd596a1cf6da1631e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb4fd596a1cf6da1631e7e7dd8f0b59c55ec483ca15476e62341e290c3f6f427.scope/container/memory.events
Jan 23 19:13:48 compute-0 podman[262359]: 2026-01-23 19:13:48.558254493 +0000 UTC m=+0.149090711 container died bb4fd596a1cf6da1631e7e7dd8f0b59c55ec483ca15476e62341e290c3f6f427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sutherland, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1fd98ad57b59679e0bf8f6da3ef36983efca391d28179a69106191c6b977688-merged.mount: Deactivated successfully.
Jan 23 19:13:48 compute-0 podman[262359]: 2026-01-23 19:13:48.617244993 +0000 UTC m=+0.208081211 container remove bb4fd596a1cf6da1631e7e7dd8f0b59c55ec483ca15476e62341e290c3f6f427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 19:13:48 compute-0 systemd[1]: libpod-conmon-bb4fd596a1cf6da1631e7e7dd8f0b59c55ec483ca15476e62341e290c3f6f427.scope: Deactivated successfully.
Jan 23 19:13:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:48 compute-0 podman[262399]: 2026-01-23 19:13:48.748097459 +0000 UTC m=+0.021243559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:13:49 compute-0 podman[262399]: 2026-01-23 19:13:49.329974208 +0000 UTC m=+0.603120268 container create 54d11384cda5ef5d8d7d424df6a52f3d0c394ac634538448d3fa5347852e21ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:13:49 compute-0 systemd[1]: Started libpod-conmon-54d11384cda5ef5d8d7d424df6a52f3d0c394ac634538448d3fa5347852e21ca.scope.
Jan 23 19:13:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87577e4a3edbf9dc3d6221c42a76ee617fe4898cd44b9e1503a3667e6331f3b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87577e4a3edbf9dc3d6221c42a76ee617fe4898cd44b9e1503a3667e6331f3b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87577e4a3edbf9dc3d6221c42a76ee617fe4898cd44b9e1503a3667e6331f3b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87577e4a3edbf9dc3d6221c42a76ee617fe4898cd44b9e1503a3667e6331f3b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:49 compute-0 podman[262399]: 2026-01-23 19:13:49.412429092 +0000 UTC m=+0.685575172 container init 54d11384cda5ef5d8d7d424df6a52f3d0c394ac634538448d3fa5347852e21ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:13:49 compute-0 podman[262399]: 2026-01-23 19:13:49.42010233 +0000 UTC m=+0.693248390 container start 54d11384cda5ef5d8d7d424df6a52f3d0c394ac634538448d3fa5347852e21ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_beaver, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:13:49 compute-0 podman[262399]: 2026-01-23 19:13:49.424105027 +0000 UTC m=+0.697251107 container attach 54d11384cda5ef5d8d7d424df6a52f3d0c394ac634538448d3fa5347852e21ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 19:13:49 compute-0 gifted_beaver[262416]: {
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:     "0": [
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:         {
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "devices": [
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "/dev/loop3"
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             ],
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_name": "ceph_lv0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_size": "21470642176",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "name": "ceph_lv0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "tags": {
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.cluster_name": "ceph",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.crush_device_class": "",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.encrypted": "0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.objectstore": "bluestore",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.osd_id": "0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.type": "block",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.vdo": "0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.with_tpm": "0"
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             },
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "type": "block",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "vg_name": "ceph_vg0"
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:         }
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:     ],
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:     "1": [
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:         {
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "devices": [
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "/dev/loop4"
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             ],
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_name": "ceph_lv1",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_size": "21470642176",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "name": "ceph_lv1",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "tags": {
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.cluster_name": "ceph",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.crush_device_class": "",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.encrypted": "0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.objectstore": "bluestore",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.osd_id": "1",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.type": "block",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.vdo": "0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.with_tpm": "0"
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             },
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "type": "block",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "vg_name": "ceph_vg1"
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:         }
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:     ],
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:     "2": [
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:         {
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "devices": [
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "/dev/loop5"
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             ],
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_name": "ceph_lv2",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_size": "21470642176",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "name": "ceph_lv2",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "tags": {
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.cluster_name": "ceph",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.crush_device_class": "",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.encrypted": "0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.objectstore": "bluestore",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.osd_id": "2",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.type": "block",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.vdo": "0",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:                 "ceph.with_tpm": "0"
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             },
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "type": "block",
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:             "vg_name": "ceph_vg2"
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:         }
Jan 23 19:13:49 compute-0 gifted_beaver[262416]:     ]
Jan 23 19:13:49 compute-0 gifted_beaver[262416]: }
Jan 23 19:13:49 compute-0 systemd[1]: libpod-54d11384cda5ef5d8d7d424df6a52f3d0c394ac634538448d3fa5347852e21ca.scope: Deactivated successfully.
Jan 23 19:13:49 compute-0 podman[262399]: 2026-01-23 19:13:49.709933267 +0000 UTC m=+0.983079347 container died 54d11384cda5ef5d8d7d424df6a52f3d0c394ac634538448d3fa5347852e21ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-87577e4a3edbf9dc3d6221c42a76ee617fe4898cd44b9e1503a3667e6331f3b2-merged.mount: Deactivated successfully.
Jan 23 19:13:49 compute-0 podman[262399]: 2026-01-23 19:13:49.761234039 +0000 UTC m=+1.034380109 container remove 54d11384cda5ef5d8d7d424df6a52f3d0c394ac634538448d3fa5347852e21ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_beaver, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 19:13:49 compute-0 systemd[1]: libpod-conmon-54d11384cda5ef5d8d7d424df6a52f3d0c394ac634538448d3fa5347852e21ca.scope: Deactivated successfully.
Jan 23 19:13:49 compute-0 sudo[262324]: pam_unix(sudo:session): session closed for user root
Jan 23 19:13:49 compute-0 sudo[262437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:13:49 compute-0 sudo[262437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:13:49 compute-0 sudo[262437]: pam_unix(sudo:session): session closed for user root
Jan 23 19:13:49 compute-0 sudo[262462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:13:49 compute-0 sudo[262462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:13:50 compute-0 podman[262499]: 2026-01-23 19:13:50.23013879 +0000 UTC m=+0.041733601 container create e51bf636ea411b4f5e6f728b822ab627582f5cd7512fd412184db7133b77d986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:13:50 compute-0 systemd[1]: Started libpod-conmon-e51bf636ea411b4f5e6f728b822ab627582f5cd7512fd412184db7133b77d986.scope.
Jan 23 19:13:50 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:13:50 compute-0 podman[262499]: 2026-01-23 19:13:50.301133724 +0000 UTC m=+0.112728555 container init e51bf636ea411b4f5e6f728b822ab627582f5cd7512fd412184db7133b77d986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 19:13:50 compute-0 podman[262499]: 2026-01-23 19:13:50.21050179 +0000 UTC m=+0.022096621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:13:50 compute-0 podman[262499]: 2026-01-23 19:13:50.30834705 +0000 UTC m=+0.119941871 container start e51bf636ea411b4f5e6f728b822ab627582f5cd7512fd412184db7133b77d986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:13:50 compute-0 kind_ptolemy[262515]: 167 167
Jan 23 19:13:50 compute-0 podman[262499]: 2026-01-23 19:13:50.31322874 +0000 UTC m=+0.124823571 container attach e51bf636ea411b4f5e6f728b822ab627582f5cd7512fd412184db7133b77d986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 19:13:50 compute-0 systemd[1]: libpod-e51bf636ea411b4f5e6f728b822ab627582f5cd7512fd412184db7133b77d986.scope: Deactivated successfully.
Jan 23 19:13:50 compute-0 conmon[262515]: conmon e51bf636ea411b4f5e6f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e51bf636ea411b4f5e6f728b822ab627582f5cd7512fd412184db7133b77d986.scope/container/memory.events
Jan 23 19:13:50 compute-0 podman[262499]: 2026-01-23 19:13:50.31488588 +0000 UTC m=+0.126480691 container died e51bf636ea411b4f5e6f728b822ab627582f5cd7512fd412184db7133b77d986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 19:13:50 compute-0 ceph-mon[75097]: pgmap v1335: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ece0c9b556f87b0709d7f2924c62c870e3fc572f21137b93938f7df7d937f588-merged.mount: Deactivated successfully.
Jan 23 19:13:50 compute-0 podman[262499]: 2026-01-23 19:13:50.36160754 +0000 UTC m=+0.173202351 container remove e51bf636ea411b4f5e6f728b822ab627582f5cd7512fd412184db7133b77d986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 19:13:50 compute-0 systemd[1]: libpod-conmon-e51bf636ea411b4f5e6f728b822ab627582f5cd7512fd412184db7133b77d986.scope: Deactivated successfully.
Jan 23 19:13:50 compute-0 podman[262539]: 2026-01-23 19:13:50.540221612 +0000 UTC m=+0.039957417 container create 95364fc0b7bf999e95d7dd6849467cf769ae9ce1d97d23078ab9928199d24980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 23 19:13:50 compute-0 systemd[1]: Started libpod-conmon-95364fc0b7bf999e95d7dd6849467cf769ae9ce1d97d23078ab9928199d24980.scope.
Jan 23 19:13:50 compute-0 podman[262539]: 2026-01-23 19:13:50.52416482 +0000 UTC m=+0.023900655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:13:50 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573e41288600720c40e148a7bc501ee933f0efa2d4fe91d662f7d1ed2e61499d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573e41288600720c40e148a7bc501ee933f0efa2d4fe91d662f7d1ed2e61499d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573e41288600720c40e148a7bc501ee933f0efa2d4fe91d662f7d1ed2e61499d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573e41288600720c40e148a7bc501ee933f0efa2d4fe91d662f7d1ed2e61499d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:13:50 compute-0 podman[262539]: 2026-01-23 19:13:50.647616554 +0000 UTC m=+0.147352379 container init 95364fc0b7bf999e95d7dd6849467cf769ae9ce1d97d23078ab9928199d24980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:13:50 compute-0 podman[262539]: 2026-01-23 19:13:50.660640223 +0000 UTC m=+0.160376028 container start 95364fc0b7bf999e95d7dd6849467cf769ae9ce1d97d23078ab9928199d24980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 19:13:50 compute-0 podman[262539]: 2026-01-23 19:13:50.664001355 +0000 UTC m=+0.163737160 container attach 95364fc0b7bf999e95d7dd6849467cf769ae9ce1d97d23078ab9928199d24980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 19:13:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:51 compute-0 lvm[262642]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:13:51 compute-0 lvm[262643]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:13:51 compute-0 lvm[262642]: VG ceph_vg1 finished
Jan 23 19:13:51 compute-0 lvm[262643]: VG ceph_vg2 finished
Jan 23 19:13:51 compute-0 lvm[262640]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:13:51 compute-0 lvm[262640]: VG ceph_vg0 finished
Jan 23 19:13:51 compute-0 podman[262631]: 2026-01-23 19:13:51.399118976 +0000 UTC m=+0.100983947 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:13:51 compute-0 angry_black[262556]: {}
Jan 23 19:13:51 compute-0 systemd[1]: libpod-95364fc0b7bf999e95d7dd6849467cf769ae9ce1d97d23078ab9928199d24980.scope: Deactivated successfully.
Jan 23 19:13:51 compute-0 podman[262539]: 2026-01-23 19:13:51.437732229 +0000 UTC m=+0.937468034 container died 95364fc0b7bf999e95d7dd6849467cf769ae9ce1d97d23078ab9928199d24980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:13:51 compute-0 systemd[1]: libpod-95364fc0b7bf999e95d7dd6849467cf769ae9ce1d97d23078ab9928199d24980.scope: Consumed 1.268s CPU time.
Jan 23 19:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-573e41288600720c40e148a7bc501ee933f0efa2d4fe91d662f7d1ed2e61499d-merged.mount: Deactivated successfully.
Jan 23 19:13:51 compute-0 podman[262539]: 2026-01-23 19:13:51.481396125 +0000 UTC m=+0.981131930 container remove 95364fc0b7bf999e95d7dd6849467cf769ae9ce1d97d23078ab9928199d24980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:13:51 compute-0 systemd[1]: libpod-conmon-95364fc0b7bf999e95d7dd6849467cf769ae9ce1d97d23078ab9928199d24980.scope: Deactivated successfully.
Jan 23 19:13:51 compute-0 sudo[262462]: pam_unix(sudo:session): session closed for user root
Jan 23 19:13:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:13:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:13:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:13:51 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:13:51 compute-0 sudo[262678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:13:51 compute-0 sudo[262678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:13:51 compute-0 sudo[262678]: pam_unix(sudo:session): session closed for user root
Jan 23 19:13:52 compute-0 ceph-mon[75097]: pgmap v1336: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:52 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:13:52 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:13:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:54 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:13:54 compute-0 ceph-mon[75097]: pgmap v1337: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:56 compute-0 ceph-mon[75097]: pgmap v1338: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:13:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3445848335' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:13:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:13:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3445848335' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:13:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3445848335' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:13:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3445848335' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:13:58 compute-0 ceph-mon[75097]: pgmap v1339: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:13:58 compute-0 podman[262703]: 2026-01-23 19:13:58.983744611 +0000 UTC m=+0.060217992 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 23 19:13:59 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:14:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:14:00 compute-0 ceph-mon[75097]: pgmap v1340: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:14:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:14:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:14:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:14:02 compute-0 ceph-mon[75097]: pgmap v1341: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:04 compute-0 ceph-mon[75097]: pgmap v1342: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:04 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:05 compute-0 nova_compute[240119]: 2026-01-23 19:14:05.343 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:06 compute-0 ceph-mon[75097]: pgmap v1343: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:14:06 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6464 writes, 29K keys, 6464 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6464 writes, 6464 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1669 writes, 8133 keys, 1669 commit groups, 1.0 writes per commit group, ingest: 10.46 MB, 0.02 MB/s
                                           Interval WAL: 1669 writes, 1669 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     29.4      1.18              0.11        16    0.074       0      0       0.0       0.0
                                             L6      1/0    8.63 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     74.4     60.9      1.93              0.37        15    0.129     74K   8456       0.0       0.0
                                            Sum      1/0    8.63 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     46.2     49.0      3.12              0.47        31    0.100     74K   8456       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0     91.1     93.5      0.49              0.12         8    0.062     25K   2620       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     74.4     60.9      1.93              0.37        15    0.129     74K   8456       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     29.4      1.18              0.11        15    0.078       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.034, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.15 GB write, 0.06 MB/s write, 0.14 GB read, 0.06 MB/s read, 3.1 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5639441df8d0#2 capacity: 304.00 MB usage: 17.11 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000217 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1276,16.52 MB,5.43473%) FilterBlock(32,218.05 KB,0.0700449%) IndexBlock(32,389.34 KB,0.125072%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 19:14:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:08 compute-0 ceph-mon[75097]: pgmap v1344: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:08 compute-0 nova_compute[240119]: 2026-01-23 19:14:08.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:09 compute-0 nova_compute[240119]: 2026-01-23 19:14:09.347 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:09 compute-0 nova_compute[240119]: 2026-01-23 19:14:09.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:09 compute-0 nova_compute[240119]: 2026-01-23 19:14:09.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:09 compute-0 nova_compute[240119]: 2026-01-23 19:14:09.348 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 19:14:09 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:10 compute-0 ceph-mon[75097]: pgmap v1345: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:10 compute-0 nova_compute[240119]: 2026-01-23 19:14:10.952 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:10 compute-0 nova_compute[240119]: 2026-01-23 19:14:10.952 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 19:14:10 compute-0 nova_compute[240119]: 2026-01-23 19:14:10.968 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 19:14:12 compute-0 ceph-mon[75097]: pgmap v1346: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:12 compute-0 nova_compute[240119]: 2026-01-23 19:14:12.364 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:12 compute-0 nova_compute[240119]: 2026-01-23 19:14:12.364 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:12 compute-0 nova_compute[240119]: 2026-01-23 19:14:12.364 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:14:12 compute-0 nova_compute[240119]: 2026-01-23 19:14:12.365 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:12 compute-0 nova_compute[240119]: 2026-01-23 19:14:12.633 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:14:12 compute-0 nova_compute[240119]: 2026-01-23 19:14:12.633 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:14:12 compute-0 nova_compute[240119]: 2026-01-23 19:14:12.633 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:14:12 compute-0 nova_compute[240119]: 2026-01-23 19:14:12.634 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:14:12 compute-0 nova_compute[240119]: 2026-01-23 19:14:12.634 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:14:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:14:13 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2646853957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:14:13 compute-0 nova_compute[240119]: 2026-01-23 19:14:13.199 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:14:13 compute-0 nova_compute[240119]: 2026-01-23 19:14:13.364 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:14:13 compute-0 nova_compute[240119]: 2026-01-23 19:14:13.365 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4918MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:14:13 compute-0 nova_compute[240119]: 2026-01-23 19:14:13.365 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:14:13 compute-0 nova_compute[240119]: 2026-01-23 19:14:13.366 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:14:13 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2646853957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:14:13 compute-0 nova_compute[240119]: 2026-01-23 19:14:13.465 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:14:13 compute-0 nova_compute[240119]: 2026-01-23 19:14:13.466 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:14:13 compute-0 nova_compute[240119]: 2026-01-23 19:14:13.486 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:14:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:14:13.595 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:14:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:14:13.595 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:14:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:14:13.595 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:14:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:14:13 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/986525828' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:14:14 compute-0 nova_compute[240119]: 2026-01-23 19:14:14.012 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:14:14 compute-0 nova_compute[240119]: 2026-01-23 19:14:14.018 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:14:14 compute-0 nova_compute[240119]: 2026-01-23 19:14:14.041 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:14:14 compute-0 nova_compute[240119]: 2026-01-23 19:14:14.044 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:14:14 compute-0 nova_compute[240119]: 2026-01-23 19:14:14.044 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:14:14 compute-0 nova_compute[240119]: 2026-01-23 19:14:14.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:14 compute-0 ceph-mon[75097]: pgmap v1347: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:14 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/986525828' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:14:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:15 compute-0 nova_compute[240119]: 2026-01-23 19:14:15.364 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:15 compute-0 nova_compute[240119]: 2026-01-23 19:14:15.364 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:14:15 compute-0 nova_compute[240119]: 2026-01-23 19:14:15.365 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:14:15 compute-0 nova_compute[240119]: 2026-01-23 19:14:15.873 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:14:16 compute-0 ceph-mon[75097]: pgmap v1348: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:18 compute-0 nova_compute[240119]: 2026-01-23 19:14:18.347 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:18 compute-0 ceph-mon[75097]: pgmap v1349: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:18 compute-0 nova_compute[240119]: 2026-01-23 19:14:18.841 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:19 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:19 compute-0 ceph-mon[75097]: pgmap v1350: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:21 compute-0 ceph-mon[75097]: pgmap v1351: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:22 compute-0 podman[262767]: 2026-01-23 19:14:22.00615049 +0000 UTC m=+0.084192857 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller)
Jan 23 19:14:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:24 compute-0 ceph-mon[75097]: pgmap v1352: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:24 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:26 compute-0 ceph-mon[75097]: pgmap v1353: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:28 compute-0 ceph-mon[75097]: pgmap v1354: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:14:28
Jan 23 19:14:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:14:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:14:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 23 19:14:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:14:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:29 compute-0 podman[262794]: 2026-01-23 19:14:29.984716855 +0000 UTC m=+0.057022474 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 23 19:14:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:14:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:14:30 compute-0 ceph-mon[75097]: pgmap v1355: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:14:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:14:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:14:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:14:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:14:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:14:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:14:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:14:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:14:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:14:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:14:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:14:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:14:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:14:32 compute-0 ceph-mon[75097]: pgmap v1356: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:34 compute-0 ceph-mon[75097]: pgmap v1357: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:34 compute-0 nova_compute[240119]: 2026-01-23 19:14:34.778 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:14:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:36 compute-0 ceph-mon[75097]: pgmap v1358: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:38 compute-0 ceph-mon[75097]: pgmap v1359: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:39 compute-0 ceph-mon[75097]: pgmap v1360: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:14:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:42 compute-0 ceph-mon[75097]: pgmap v1361: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:44 compute-0 ceph-mon[75097]: pgmap v1362: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:46 compute-0 ceph-mon[75097]: pgmap v1363: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:49 compute-0 ceph-mon[75097]: pgmap v1364: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:50 compute-0 ceph-mon[75097]: pgmap v1365: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:51 compute-0 sudo[262813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:14:51 compute-0 sudo[262813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:14:51 compute-0 sudo[262813]: pam_unix(sudo:session): session closed for user root
Jan 23 19:14:51 compute-0 sudo[262838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:14:51 compute-0 sudo[262838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:14:52 compute-0 sudo[262838]: pam_unix(sudo:session): session closed for user root
Jan 23 19:14:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:14:52 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:14:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:14:52 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:14:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:14:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:53 compute-0 podman[262895]: 2026-01-23 19:14:53.01963908 +0000 UTC m=+0.091894484 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 23 19:14:53 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:14:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:14:53 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:14:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:14:53 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:14:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:14:53 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:14:53 compute-0 sudo[262922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:14:53 compute-0 sudo[262922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:14:53 compute-0 sudo[262922]: pam_unix(sudo:session): session closed for user root
Jan 23 19:14:53 compute-0 sudo[262947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:14:53 compute-0 sudo[262947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:14:53 compute-0 ceph-mon[75097]: pgmap v1366: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:14:53 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:14:54 compute-0 podman[262985]: 2026-01-23 19:14:53.950789818 +0000 UTC m=+0.037052855 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:14:54 compute-0 podman[262985]: 2026-01-23 19:14:54.043879792 +0000 UTC m=+0.130142849 container create 2f3482794c422084d713fb9d68e106c819de0ee169564d05d22f520af6b4e11c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_elion, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 19:14:54 compute-0 systemd[1]: Started libpod-conmon-2f3482794c422084d713fb9d68e106c819de0ee169564d05d22f520af6b4e11c.scope.
Jan 23 19:14:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:14:54 compute-0 podman[262985]: 2026-01-23 19:14:54.578403785 +0000 UTC m=+0.664666802 container init 2f3482794c422084d713fb9d68e106c819de0ee169564d05d22f520af6b4e11c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_elion, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 23 19:14:54 compute-0 podman[262985]: 2026-01-23 19:14:54.586503082 +0000 UTC m=+0.672766129 container start 2f3482794c422084d713fb9d68e106c819de0ee169564d05d22f520af6b4e11c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_elion, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:14:54 compute-0 hopeful_elion[263001]: 167 167
Jan 23 19:14:54 compute-0 systemd[1]: libpod-2f3482794c422084d713fb9d68e106c819de0ee169564d05d22f520af6b4e11c.scope: Deactivated successfully.
Jan 23 19:14:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:14:55 compute-0 podman[262985]: 2026-01-23 19:14:55.514523024 +0000 UTC m=+1.600786081 container attach 2f3482794c422084d713fb9d68e106c819de0ee169564d05d22f520af6b4e11c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:14:55 compute-0 podman[262985]: 2026-01-23 19:14:55.515281313 +0000 UTC m=+1.601544370 container died 2f3482794c422084d713fb9d68e106c819de0ee169564d05d22f520af6b4e11c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:14:55 compute-0 ceph-mon[75097]: pgmap v1367: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:14:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:14:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:14:55 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c3d7fd03476ac97908331472b0700d4c9068c2542d314cad01d106ddbb9c7b0-merged.mount: Deactivated successfully.
Jan 23 19:14:56 compute-0 podman[262985]: 2026-01-23 19:14:56.118363951 +0000 UTC m=+2.204626968 container remove 2f3482794c422084d713fb9d68e106c819de0ee169564d05d22f520af6b4e11c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_elion, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:14:56 compute-0 systemd[1]: libpod-conmon-2f3482794c422084d713fb9d68e106c819de0ee169564d05d22f520af6b4e11c.scope: Deactivated successfully.
Jan 23 19:14:56 compute-0 podman[263027]: 2026-01-23 19:14:56.259525227 +0000 UTC m=+0.026488828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:14:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:56 compute-0 podman[263027]: 2026-01-23 19:14:56.880008369 +0000 UTC m=+0.646971950 container create 2a024acd825fb0e4b60077fad2312065e2d022c8037a5c16293fe045344aba28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:14:56 compute-0 systemd[1]: Started libpod-conmon-2a024acd825fb0e4b60077fad2312065e2d022c8037a5c16293fe045344aba28.scope.
Jan 23 19:14:56 compute-0 ceph-mon[75097]: pgmap v1368: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c69008a33ef21333b6e2a22b06ed5b27d2ea3ea283ea283b0d87c1855dddd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c69008a33ef21333b6e2a22b06ed5b27d2ea3ea283ea283b0d87c1855dddd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c69008a33ef21333b6e2a22b06ed5b27d2ea3ea283ea283b0d87c1855dddd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c69008a33ef21333b6e2a22b06ed5b27d2ea3ea283ea283b0d87c1855dddd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c69008a33ef21333b6e2a22b06ed5b27d2ea3ea283ea283b0d87c1855dddd1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:14:57 compute-0 podman[263027]: 2026-01-23 19:14:57.064201407 +0000 UTC m=+0.831165008 container init 2a024acd825fb0e4b60077fad2312065e2d022c8037a5c16293fe045344aba28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 19:14:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:14:57 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/33385811' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:14:57 compute-0 podman[263027]: 2026-01-23 19:14:57.071974607 +0000 UTC m=+0.838938188 container start 2a024acd825fb0e4b60077fad2312065e2d022c8037a5c16293fe045344aba28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 23 19:14:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:14:57 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/33385811' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:14:57 compute-0 podman[263027]: 2026-01-23 19:14:57.100769419 +0000 UTC m=+0.867733030 container attach 2a024acd825fb0e4b60077fad2312065e2d022c8037a5c16293fe045344aba28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_yonath, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:14:57 compute-0 hungry_yonath[263044]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:14:57 compute-0 hungry_yonath[263044]: --> All data devices are unavailable
Jan 23 19:14:57 compute-0 systemd[1]: libpod-2a024acd825fb0e4b60077fad2312065e2d022c8037a5c16293fe045344aba28.scope: Deactivated successfully.
Jan 23 19:14:57 compute-0 podman[263027]: 2026-01-23 19:14:57.555196657 +0000 UTC m=+1.322160238 container died 2a024acd825fb0e4b60077fad2312065e2d022c8037a5c16293fe045344aba28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_yonath, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-24c69008a33ef21333b6e2a22b06ed5b27d2ea3ea283ea283b0d87c1855dddd1-merged.mount: Deactivated successfully.
Jan 23 19:14:58 compute-0 ceph-mon[75097]: pgmap v1369: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/33385811' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:14:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/33385811' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:14:58 compute-0 podman[263027]: 2026-01-23 19:14:58.069018144 +0000 UTC m=+1.835981725 container remove 2a024acd825fb0e4b60077fad2312065e2d022c8037a5c16293fe045344aba28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:14:58 compute-0 sudo[262947]: pam_unix(sudo:session): session closed for user root
Jan 23 19:14:58 compute-0 systemd[1]: libpod-conmon-2a024acd825fb0e4b60077fad2312065e2d022c8037a5c16293fe045344aba28.scope: Deactivated successfully.
Jan 23 19:14:58 compute-0 sudo[263074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:14:58 compute-0 sudo[263074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:14:58 compute-0 sudo[263074]: pam_unix(sudo:session): session closed for user root
Jan 23 19:14:58 compute-0 sudo[263099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:14:58 compute-0 sudo[263099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:14:58 compute-0 podman[263137]: 2026-01-23 19:14:58.517997487 +0000 UTC m=+0.041721279 container create 2f97443cccb3523b710e02b1476f00d90930937ff1c8424da913829e7cc9754b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:14:58 compute-0 systemd[1]: Started libpod-conmon-2f97443cccb3523b710e02b1476f00d90930937ff1c8424da913829e7cc9754b.scope.
Jan 23 19:14:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:14:58 compute-0 podman[263137]: 2026-01-23 19:14:58.498809629 +0000 UTC m=+0.022533451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:14:58 compute-0 podman[263137]: 2026-01-23 19:14:58.605088025 +0000 UTC m=+0.128811847 container init 2f97443cccb3523b710e02b1476f00d90930937ff1c8424da913829e7cc9754b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_fermat, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 19:14:58 compute-0 podman[263137]: 2026-01-23 19:14:58.612021584 +0000 UTC m=+0.135745366 container start 2f97443cccb3523b710e02b1476f00d90930937ff1c8424da913829e7cc9754b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 19:14:58 compute-0 keen_fermat[263153]: 167 167
Jan 23 19:14:58 compute-0 systemd[1]: libpod-2f97443cccb3523b710e02b1476f00d90930937ff1c8424da913829e7cc9754b.scope: Deactivated successfully.
Jan 23 19:14:58 compute-0 conmon[263153]: conmon 2f97443cccb3523b710e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f97443cccb3523b710e02b1476f00d90930937ff1c8424da913829e7cc9754b.scope/container/memory.events
Jan 23 19:14:58 compute-0 podman[263137]: 2026-01-23 19:14:58.619943557 +0000 UTC m=+0.143667349 container attach 2f97443cccb3523b710e02b1476f00d90930937ff1c8424da913829e7cc9754b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_fermat, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 19:14:58 compute-0 podman[263137]: 2026-01-23 19:14:58.620366088 +0000 UTC m=+0.144089880 container died 2f97443cccb3523b710e02b1476f00d90930937ff1c8424da913829e7cc9754b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 23 19:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e80a979efcf687996fb6433974fa30d0c7ca9c1f3c28230c7667aed24718984-merged.mount: Deactivated successfully.
Jan 23 19:14:58 compute-0 podman[263137]: 2026-01-23 19:14:58.67455538 +0000 UTC m=+0.198279172 container remove 2f97443cccb3523b710e02b1476f00d90930937ff1c8424da913829e7cc9754b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 23 19:14:58 compute-0 systemd[1]: libpod-conmon-2f97443cccb3523b710e02b1476f00d90930937ff1c8424da913829e7cc9754b.scope: Deactivated successfully.
Jan 23 19:14:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:14:58 compute-0 podman[263175]: 2026-01-23 19:14:58.822089744 +0000 UTC m=+0.022557153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:14:58 compute-0 podman[263175]: 2026-01-23 19:14:58.956365823 +0000 UTC m=+0.156833212 container create c668aab4b8b3f61774e3534ce1b9e6c7dc52e1813afecc0d0a1fd3513abc4d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:14:59 compute-0 systemd[1]: Started libpod-conmon-c668aab4b8b3f61774e3534ce1b9e6c7dc52e1813afecc0d0a1fd3513abc4d91.scope.
Jan 23 19:14:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f880b6310cda77466dc733bf9321d2ea7ada321161119a4cac2d4a95595dfcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f880b6310cda77466dc733bf9321d2ea7ada321161119a4cac2d4a95595dfcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f880b6310cda77466dc733bf9321d2ea7ada321161119a4cac2d4a95595dfcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f880b6310cda77466dc733bf9321d2ea7ada321161119a4cac2d4a95595dfcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:14:59 compute-0 podman[263175]: 2026-01-23 19:14:59.273903756 +0000 UTC m=+0.474371205 container init c668aab4b8b3f61774e3534ce1b9e6c7dc52e1813afecc0d0a1fd3513abc4d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 19:14:59 compute-0 podman[263175]: 2026-01-23 19:14:59.281752198 +0000 UTC m=+0.482219587 container start c668aab4b8b3f61774e3534ce1b9e6c7dc52e1813afecc0d0a1fd3513abc4d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:14:59 compute-0 podman[263175]: 2026-01-23 19:14:59.29287647 +0000 UTC m=+0.493343879 container attach c668aab4b8b3f61774e3534ce1b9e6c7dc52e1813afecc0d0a1fd3513abc4d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]: {
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:     "0": [
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:         {
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "devices": [
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "/dev/loop3"
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             ],
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_name": "ceph_lv0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_size": "21470642176",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "name": "ceph_lv0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "tags": {
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.cluster_name": "ceph",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.crush_device_class": "",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.encrypted": "0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.objectstore": "bluestore",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.osd_id": "0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.type": "block",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.vdo": "0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.with_tpm": "0"
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             },
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "type": "block",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "vg_name": "ceph_vg0"
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:         }
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:     ],
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:     "1": [
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:         {
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "devices": [
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "/dev/loop4"
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             ],
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_name": "ceph_lv1",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_size": "21470642176",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "name": "ceph_lv1",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "tags": {
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.cluster_name": "ceph",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.crush_device_class": "",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.encrypted": "0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.objectstore": "bluestore",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.osd_id": "1",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.type": "block",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.vdo": "0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.with_tpm": "0"
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             },
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "type": "block",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "vg_name": "ceph_vg1"
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:         }
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:     ],
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:     "2": [
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:         {
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "devices": [
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "/dev/loop5"
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             ],
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_name": "ceph_lv2",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_size": "21470642176",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "name": "ceph_lv2",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "tags": {
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.cluster_name": "ceph",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.crush_device_class": "",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.encrypted": "0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.objectstore": "bluestore",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.osd_id": "2",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.type": "block",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.vdo": "0",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:                 "ceph.with_tpm": "0"
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             },
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "type": "block",
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:             "vg_name": "ceph_vg2"
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:         }
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]:     ]
Jan 23 19:14:59 compute-0 zealous_mendeleev[263191]: }
Jan 23 19:14:59 compute-0 systemd[1]: libpod-c668aab4b8b3f61774e3534ce1b9e6c7dc52e1813afecc0d0a1fd3513abc4d91.scope: Deactivated successfully.
Jan 23 19:14:59 compute-0 podman[263175]: 2026-01-23 19:14:59.589017271 +0000 UTC m=+0.789484660 container died c668aab4b8b3f61774e3534ce1b9e6c7dc52e1813afecc0d0a1fd3513abc4d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f880b6310cda77466dc733bf9321d2ea7ada321161119a4cac2d4a95595dfcf-merged.mount: Deactivated successfully.
Jan 23 19:15:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:15:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:15:00 compute-0 ceph-mon[75097]: pgmap v1370: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:00 compute-0 podman[263175]: 2026-01-23 19:15:00.22530712 +0000 UTC m=+1.425774509 container remove c668aab4b8b3f61774e3534ce1b9e6c7dc52e1813afecc0d0a1fd3513abc4d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:15:00 compute-0 systemd[1]: libpod-conmon-c668aab4b8b3f61774e3534ce1b9e6c7dc52e1813afecc0d0a1fd3513abc4d91.scope: Deactivated successfully.
Jan 23 19:15:00 compute-0 sudo[263099]: pam_unix(sudo:session): session closed for user root
Jan 23 19:15:00 compute-0 podman[263212]: 2026-01-23 19:15:00.293148986 +0000 UTC m=+0.237590522 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 23 19:15:00 compute-0 sudo[263229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:15:00 compute-0 sudo[263229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:15:00 compute-0 sudo[263229]: pam_unix(sudo:session): session closed for user root
Jan 23 19:15:00 compute-0 sudo[263256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:15:00 compute-0 sudo[263256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:15:00 compute-0 podman[263293]: 2026-01-23 19:15:00.651752063 +0000 UTC m=+0.031573372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:15:00 compute-0 podman[263293]: 2026-01-23 19:15:00.77489168 +0000 UTC m=+0.154712979 container create 50a191eaefea4244727444ff3a14e359f7b1965a439591138ea2432126cc7be8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:15:00 compute-0 systemd[1]: Started libpod-conmon-50a191eaefea4244727444ff3a14e359f7b1965a439591138ea2432126cc7be8.scope.
Jan 23 19:15:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:15:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:15:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:15:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:15:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:15:01 compute-0 podman[263293]: 2026-01-23 19:15:01.076818613 +0000 UTC m=+0.456639932 container init 50a191eaefea4244727444ff3a14e359f7b1965a439591138ea2432126cc7be8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:15:01 compute-0 podman[263293]: 2026-01-23 19:15:01.086312955 +0000 UTC m=+0.466134244 container start 50a191eaefea4244727444ff3a14e359f7b1965a439591138ea2432126cc7be8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:15:01 compute-0 awesome_nobel[263309]: 167 167
Jan 23 19:15:01 compute-0 systemd[1]: libpod-50a191eaefea4244727444ff3a14e359f7b1965a439591138ea2432126cc7be8.scope: Deactivated successfully.
Jan 23 19:15:01 compute-0 podman[263293]: 2026-01-23 19:15:01.09552825 +0000 UTC m=+0.475349569 container attach 50a191eaefea4244727444ff3a14e359f7b1965a439591138ea2432126cc7be8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 23 19:15:01 compute-0 podman[263293]: 2026-01-23 19:15:01.096710969 +0000 UTC m=+0.476532258 container died 50a191eaefea4244727444ff3a14e359f7b1965a439591138ea2432126cc7be8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 23 19:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-87bb0b9c9a6aa341e66e83e646c6502d7c28017ae8921473144d0d7b510d4e81-merged.mount: Deactivated successfully.
Jan 23 19:15:01 compute-0 podman[263293]: 2026-01-23 19:15:01.95877857 +0000 UTC m=+1.338599859 container remove 50a191eaefea4244727444ff3a14e359f7b1965a439591138ea2432126cc7be8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 19:15:02 compute-0 systemd[1]: libpod-conmon-50a191eaefea4244727444ff3a14e359f7b1965a439591138ea2432126cc7be8.scope: Deactivated successfully.
Jan 23 19:15:02 compute-0 podman[263332]: 2026-01-23 19:15:02.198790261 +0000 UTC m=+0.109476854 container create d38195f0ab2ee597e831e53a12e9b438022a3bd8ff677b0e521fe5f0b5bc4123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 19:15:02 compute-0 podman[263332]: 2026-01-23 19:15:02.113401546 +0000 UTC m=+0.024088149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:15:02 compute-0 systemd[1]: Started libpod-conmon-d38195f0ab2ee597e831e53a12e9b438022a3bd8ff677b0e521fe5f0b5bc4123.scope.
Jan 23 19:15:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b3d76e02a7bda7c348d3d1e87d9ed97994dbbaf8f5cd8b816ab52f486fbc570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b3d76e02a7bda7c348d3d1e87d9ed97994dbbaf8f5cd8b816ab52f486fbc570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b3d76e02a7bda7c348d3d1e87d9ed97994dbbaf8f5cd8b816ab52f486fbc570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b3d76e02a7bda7c348d3d1e87d9ed97994dbbaf8f5cd8b816ab52f486fbc570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:15:02 compute-0 podman[263332]: 2026-01-23 19:15:02.483157675 +0000 UTC m=+0.393844268 container init d38195f0ab2ee597e831e53a12e9b438022a3bd8ff677b0e521fe5f0b5bc4123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:15:02 compute-0 podman[263332]: 2026-01-23 19:15:02.491385166 +0000 UTC m=+0.402071759 container start d38195f0ab2ee597e831e53a12e9b438022a3bd8ff677b0e521fe5f0b5bc4123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 19:15:02 compute-0 ceph-mon[75097]: pgmap v1371: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:02 compute-0 podman[263332]: 2026-01-23 19:15:02.650723587 +0000 UTC m=+0.561410190 container attach d38195f0ab2ee597e831e53a12e9b438022a3bd8ff677b0e521fe5f0b5bc4123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rubin, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:15:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:03 compute-0 lvm[263427]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:15:03 compute-0 lvm[263427]: VG ceph_vg0 finished
Jan 23 19:15:03 compute-0 lvm[263428]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:15:03 compute-0 lvm[263428]: VG ceph_vg1 finished
Jan 23 19:15:03 compute-0 lvm[263430]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:15:03 compute-0 lvm[263430]: VG ceph_vg2 finished
Jan 23 19:15:03 compute-0 infallible_rubin[263348]: {}
Jan 23 19:15:03 compute-0 systemd[1]: libpod-d38195f0ab2ee597e831e53a12e9b438022a3bd8ff677b0e521fe5f0b5bc4123.scope: Deactivated successfully.
Jan 23 19:15:03 compute-0 podman[263332]: 2026-01-23 19:15:03.397571465 +0000 UTC m=+1.308258058 container died d38195f0ab2ee597e831e53a12e9b438022a3bd8ff677b0e521fe5f0b5bc4123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rubin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 19:15:03 compute-0 systemd[1]: libpod-d38195f0ab2ee597e831e53a12e9b438022a3bd8ff677b0e521fe5f0b5bc4123.scope: Consumed 1.469s CPU time.
Jan 23 19:15:04 compute-0 ceph-mon[75097]: pgmap v1372: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b3d76e02a7bda7c348d3d1e87d9ed97994dbbaf8f5cd8b816ab52f486fbc570-merged.mount: Deactivated successfully.
Jan 23 19:15:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:05 compute-0 podman[263332]: 2026-01-23 19:15:05.129397835 +0000 UTC m=+3.040084428 container remove d38195f0ab2ee597e831e53a12e9b438022a3bd8ff677b0e521fe5f0b5bc4123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rubin, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:15:05 compute-0 systemd[1]: libpod-conmon-d38195f0ab2ee597e831e53a12e9b438022a3bd8ff677b0e521fe5f0b5bc4123.scope: Deactivated successfully.
Jan 23 19:15:05 compute-0 sudo[263256]: pam_unix(sudo:session): session closed for user root
Jan 23 19:15:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:15:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:05 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:15:05 compute-0 nova_compute[240119]: 2026-01-23 19:15:05.627 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:15:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:15:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:07 compute-0 ceph-mon[75097]: pgmap v1373: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:07 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:15:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:15:07 compute-0 sudo[263445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:15:07 compute-0 sudo[263445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:15:07 compute-0 sudo[263445]: pam_unix(sudo:session): session closed for user root
Jan 23 19:15:08 compute-0 ceph-mon[75097]: pgmap v1374: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:08 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:15:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:10 compute-0 ceph-mon[75097]: pgmap v1375: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:10 compute-0 nova_compute[240119]: 2026-01-23 19:15:10.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:15:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:11 compute-0 nova_compute[240119]: 2026-01-23 19:15:11.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:15:11 compute-0 nova_compute[240119]: 2026-01-23 19:15:11.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:15:12 compute-0 nova_compute[240119]: 2026-01-23 19:15:12.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:15:12 compute-0 nova_compute[240119]: 2026-01-23 19:15:12.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:15:12 compute-0 nova_compute[240119]: 2026-01-23 19:15:12.454 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:15:12 compute-0 nova_compute[240119]: 2026-01-23 19:15:12.454 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:15:12 compute-0 nova_compute[240119]: 2026-01-23 19:15:12.455 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:15:12 compute-0 nova_compute[240119]: 2026-01-23 19:15:12.455 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:15:12 compute-0 nova_compute[240119]: 2026-01-23 19:15:12.455 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:15:12 compute-0 ceph-mon[75097]: pgmap v1376: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:15:13 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3909491628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.066 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.233 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.234 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4907MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.234 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.235 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.392 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.392 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.506 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing inventories for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 19:15:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:15:13.596 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:15:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:15:13.596 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:15:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:15:13.596 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.615 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Updating ProviderTree inventory for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.615 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Updating inventory in ProviderTree for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.634 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing aggregate associations for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.662 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing trait associations for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 19:15:13 compute-0 nova_compute[240119]: 2026-01-23 19:15:13.680 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:15:13 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3909491628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:15:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:15:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2872169748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:15:14 compute-0 nova_compute[240119]: 2026-01-23 19:15:14.241 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:15:14 compute-0 nova_compute[240119]: 2026-01-23 19:15:14.247 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:15:14 compute-0 nova_compute[240119]: 2026-01-23 19:15:14.301 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:15:14 compute-0 nova_compute[240119]: 2026-01-23 19:15:14.303 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:15:14 compute-0 nova_compute[240119]: 2026-01-23 19:15:14.303 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:15:14 compute-0 ceph-mon[75097]: pgmap v1377: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:14 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2872169748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:15:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:15 compute-0 nova_compute[240119]: 2026-01-23 19:15:15.303 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:15:15 compute-0 nova_compute[240119]: 2026-01-23 19:15:15.304 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:15:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:16 compute-0 nova_compute[240119]: 2026-01-23 19:15:16.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:15:16 compute-0 nova_compute[240119]: 2026-01-23 19:15:16.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:15:16 compute-0 nova_compute[240119]: 2026-01-23 19:15:16.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:15:16 compute-0 ceph-mon[75097]: pgmap v1378: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:17 compute-0 nova_compute[240119]: 2026-01-23 19:15:16.999 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:15:17 compute-0 ceph-mon[75097]: pgmap v1379: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:19 compute-0 nova_compute[240119]: 2026-01-23 19:15:19.347 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:15:20 compute-0 ceph-mon[75097]: pgmap v1380: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:15:21 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9140 writes, 33K keys, 9140 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9140 writes, 2255 syncs, 4.05 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2041 writes, 5837 keys, 2041 commit groups, 1.0 writes per commit group, ingest: 7.08 MB, 0.01 MB/s
                                           Interval WAL: 2041 writes, 763 syncs, 2.67 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:15:22 compute-0 ceph-mon[75097]: pgmap v1381: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:24 compute-0 podman[263514]: 2026-01-23 19:15:24.066508003 +0000 UTC m=+0.134043504 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 19:15:24 compute-0 ceph-mon[75097]: pgmap v1382: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:26 compute-0 nova_compute[240119]: 2026-01-23 19:15:26.176 240123 DEBUG oslo_concurrency.processutils [None req-3de0b70a-14f1-4fda-8ad9-304ec1596134 8bc2eb15461742fb9aafbd3726295065 03b69a8a6d0b48389cffb70904385997 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:15:26 compute-0 nova_compute[240119]: 2026-01-23 19:15:26.210 240123 DEBUG oslo_concurrency.processutils [None req-3de0b70a-14f1-4fda-8ad9-304ec1596134 8bc2eb15461742fb9aafbd3726295065 03b69a8a6d0b48389cffb70904385997 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:15:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:15:26 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 14K writes, 55K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4560 syncs, 3.24 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3386 writes, 10K keys, 3386 commit groups, 1.0 writes per commit group, ingest: 14.83 MB, 0.02 MB/s
                                           Interval WAL: 3386 writes, 1322 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:15:26 compute-0 ceph-mon[75097]: pgmap v1383: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:28 compute-0 ceph-mon[75097]: pgmap v1384: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:15:28
Jan 23 19:15:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:15:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:15:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms']
Jan 23 19:15:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:15:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:15:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:15:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:30 compute-0 ceph-mon[75097]: pgmap v1385: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:15:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:15:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:15:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:15:30 compute-0 podman[263541]: 2026-01-23 19:15:30.994468505 +0000 UTC m=+0.078620242 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 23 19:15:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:15:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:15:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:15:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:15:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:15:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:15:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:15:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:15:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:15:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:15:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:15:31 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 9033 writes, 33K keys, 9033 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9033 writes, 2107 syncs, 4.29 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1584 writes, 3841 keys, 1584 commit groups, 1.0 writes per commit group, ingest: 2.18 MB, 0.00 MB/s
                                           Interval WAL: 1585 writes, 496 syncs, 3.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:15:31 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:15:31.785 155616 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:3d:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:eb:2e:86:1a:4d'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 19:15:31 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:15:31.786 155616 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 19:15:32 compute-0 ceph-mon[75097]: pgmap v1386: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:34 compute-0 ceph-mon[75097]: pgmap v1387: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:34 compute-0 ceph-mgr[75390]: [devicehealth INFO root] Check health
Jan 23 19:15:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:36 compute-0 ceph-mon[75097]: pgmap v1388: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:36 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:15:36.788 155616 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d273cd83-f2f8-4b91-b18f-957051ff2a6c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 19:15:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:38 compute-0 ceph-mon[75097]: pgmap v1389: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:40 compute-0 ceph-mon[75097]: pgmap v1390: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:15:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:42 compute-0 ceph-mon[75097]: pgmap v1391: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:44 compute-0 ceph-mon[75097]: pgmap v1392: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:46 compute-0 ceph-mon[75097]: pgmap v1393: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:48 compute-0 ceph-mon[75097]: pgmap v1394: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.523915) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195749523967, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2302, "num_deletes": 507, "total_data_size": 3569449, "memory_usage": 3612352, "flush_reason": "Manual Compaction"}
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195749556532, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3494812, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28835, "largest_seqno": 31136, "table_properties": {"data_size": 3484706, "index_size": 6024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 22688, "raw_average_key_size": 19, "raw_value_size": 3462809, "raw_average_value_size": 2932, "num_data_blocks": 267, "num_entries": 1181, "num_filter_entries": 1181, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769195516, "oldest_key_time": 1769195516, "file_creation_time": 1769195749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 32706 microseconds, and 10995 cpu microseconds.
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.556624) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3494812 bytes OK
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.556644) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.560019) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.560032) EVENT_LOG_v1 {"time_micros": 1769195749560027, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.560048) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3558852, prev total WAL file size 3558852, number of live WAL files 2.
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.560992) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3412KB)], [62(8835KB)]
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195749561040, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12542755, "oldest_snapshot_seqno": -1}
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6190 keys, 10723485 bytes, temperature: kUnknown
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195749648984, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10723485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10679777, "index_size": 27150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 156131, "raw_average_key_size": 25, "raw_value_size": 10566546, "raw_average_value_size": 1707, "num_data_blocks": 1108, "num_entries": 6190, "num_filter_entries": 6190, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769195749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.649250) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10723485 bytes
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.650935) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.5 rd, 121.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7217, records dropped: 1027 output_compression: NoCompression
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.650950) EVENT_LOG_v1 {"time_micros": 1769195749650943, "job": 34, "event": "compaction_finished", "compaction_time_micros": 88035, "compaction_time_cpu_micros": 23134, "output_level": 6, "num_output_files": 1, "total_output_size": 10723485, "num_input_records": 7217, "num_output_records": 6190, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195749651647, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195749653342, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.560882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.653507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.653513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.653514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.653516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:15:49 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:15:49.653518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:15:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:50 compute-0 ceph-mon[75097]: pgmap v1395: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:52 compute-0 ceph-mon[75097]: pgmap v1396: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:54 compute-0 ceph-mon[75097]: pgmap v1397: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:55 compute-0 podman[263561]: 2026-01-23 19:15:55.011526853 +0000 UTC m=+0.083241164 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 19:15:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:15:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:15:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2643733941' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:15:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:15:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2643733941' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:15:57 compute-0 ceph-mon[75097]: pgmap v1398: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:58 compute-0 ceph-mon[75097]: pgmap v1399: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:15:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2643733941' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:15:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/2643733941' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:15:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:16:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:16:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:00 compute-0 ceph-mon[75097]: pgmap v1400: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:16:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:16:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:16:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:16:01 compute-0 ceph-mon[75097]: pgmap v1401: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:01 compute-0 podman[263587]: 2026-01-23 19:16:01.968388098 +0000 UTC m=+0.051502149 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 19:16:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:03 compute-0 ceph-mon[75097]: pgmap v1402: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:05 compute-0 nova_compute[240119]: 2026-01-23 19:16:05.343 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:16:05 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:05 compute-0 ceph-mon[75097]: pgmap v1403: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:07 compute-0 sudo[263606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:16:07 compute-0 sudo[263606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:07 compute-0 sudo[263606]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:07 compute-0 sudo[263631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 23 19:16:07 compute-0 sudo[263631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:07 compute-0 sudo[263631]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:16:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:16:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:16:07 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:16:08 compute-0 sudo[263675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:16:08 compute-0 sudo[263675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:08 compute-0 sudo[263675]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:08 compute-0 sudo[263700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:16:08 compute-0 sudo[263700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:08 compute-0 ceph-mon[75097]: pgmap v1404: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:08 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:16:08 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:16:08 compute-0 sudo[263700]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:16:08 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:16:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:16:08 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:16:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:16:08 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:16:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:16:08 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:16:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:16:08 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:16:08 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:16:08 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:16:08 compute-0 sudo[263755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:16:08 compute-0 sudo[263755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:08 compute-0 sudo[263755]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:08 compute-0 sudo[263780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:16:08 compute-0 sudo[263780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:09 compute-0 podman[263817]: 2026-01-23 19:16:09.050674355 +0000 UTC m=+0.046474286 container create c1e1b0bfe0db63adbfb815a45cfa6b3b900ee97a22395161b8e53984806018a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wozniak, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:16:09 compute-0 systemd[1]: Started libpod-conmon-c1e1b0bfe0db63adbfb815a45cfa6b3b900ee97a22395161b8e53984806018a9.scope.
Jan 23 19:16:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:16:09 compute-0 podman[263817]: 2026-01-23 19:16:09.028303639 +0000 UTC m=+0.024103600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:16:09 compute-0 podman[263817]: 2026-01-23 19:16:09.139325279 +0000 UTC m=+0.135125240 container init c1e1b0bfe0db63adbfb815a45cfa6b3b900ee97a22395161b8e53984806018a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wozniak, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 23 19:16:09 compute-0 podman[263817]: 2026-01-23 19:16:09.14832744 +0000 UTC m=+0.144127371 container start c1e1b0bfe0db63adbfb815a45cfa6b3b900ee97a22395161b8e53984806018a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wozniak, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 19:16:09 compute-0 crazy_wozniak[263833]: 167 167
Jan 23 19:16:09 compute-0 systemd[1]: libpod-c1e1b0bfe0db63adbfb815a45cfa6b3b900ee97a22395161b8e53984806018a9.scope: Deactivated successfully.
Jan 23 19:16:09 compute-0 podman[263817]: 2026-01-23 19:16:09.155267499 +0000 UTC m=+0.151067450 container attach c1e1b0bfe0db63adbfb815a45cfa6b3b900ee97a22395161b8e53984806018a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 23 19:16:09 compute-0 podman[263817]: 2026-01-23 19:16:09.156722525 +0000 UTC m=+0.152522476 container died c1e1b0bfe0db63adbfb815a45cfa6b3b900ee97a22395161b8e53984806018a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:16:09 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:16:09 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:16:09 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:16:09 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:16:09 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:16:09 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8e2ce143bdaae718168baafa5729e1bef2323bfab44ee58f474288f7ee550d5-merged.mount: Deactivated successfully.
Jan 23 19:16:09 compute-0 podman[263817]: 2026-01-23 19:16:09.24906516 +0000 UTC m=+0.244865081 container remove c1e1b0bfe0db63adbfb815a45cfa6b3b900ee97a22395161b8e53984806018a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 23 19:16:09 compute-0 systemd[1]: libpod-conmon-c1e1b0bfe0db63adbfb815a45cfa6b3b900ee97a22395161b8e53984806018a9.scope: Deactivated successfully.
Jan 23 19:16:09 compute-0 podman[263857]: 2026-01-23 19:16:09.41288578 +0000 UTC m=+0.044436146 container create e596286b43fa6b58fbbf37d066779552c8ed27c8049d928e046ffb8aa72eb032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 23 19:16:09 compute-0 systemd[1]: Started libpod-conmon-e596286b43fa6b58fbbf37d066779552c8ed27c8049d928e046ffb8aa72eb032.scope.
Jan 23 19:16:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8624d5eef2fae7f58a3863d6273a99e6b5fee24614b38627b031d30ae26978d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8624d5eef2fae7f58a3863d6273a99e6b5fee24614b38627b031d30ae26978d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8624d5eef2fae7f58a3863d6273a99e6b5fee24614b38627b031d30ae26978d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8624d5eef2fae7f58a3863d6273a99e6b5fee24614b38627b031d30ae26978d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8624d5eef2fae7f58a3863d6273a99e6b5fee24614b38627b031d30ae26978d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:09 compute-0 podman[263857]: 2026-01-23 19:16:09.395337381 +0000 UTC m=+0.026887767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:16:09 compute-0 podman[263857]: 2026-01-23 19:16:09.508886674 +0000 UTC m=+0.140437060 container init e596286b43fa6b58fbbf37d066779552c8ed27c8049d928e046ffb8aa72eb032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:16:09 compute-0 podman[263857]: 2026-01-23 19:16:09.518523049 +0000 UTC m=+0.150073415 container start e596286b43fa6b58fbbf37d066779552c8ed27c8049d928e046ffb8aa72eb032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williamson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 19:16:09 compute-0 podman[263857]: 2026-01-23 19:16:09.525241724 +0000 UTC m=+0.156792120 container attach e596286b43fa6b58fbbf37d066779552c8ed27c8049d928e046ffb8aa72eb032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williamson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 19:16:09 compute-0 wizardly_williamson[263873]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:16:09 compute-0 wizardly_williamson[263873]: --> All data devices are unavailable
Jan 23 19:16:09 compute-0 systemd[1]: libpod-e596286b43fa6b58fbbf37d066779552c8ed27c8049d928e046ffb8aa72eb032.scope: Deactivated successfully.
Jan 23 19:16:09 compute-0 podman[263857]: 2026-01-23 19:16:09.974397172 +0000 UTC m=+0.605947538 container died e596286b43fa6b58fbbf37d066779552c8ed27c8049d928e046ffb8aa72eb032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 23 19:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8624d5eef2fae7f58a3863d6273a99e6b5fee24614b38627b031d30ae26978d-merged.mount: Deactivated successfully.
Jan 23 19:16:10 compute-0 podman[263857]: 2026-01-23 19:16:10.024412633 +0000 UTC m=+0.655962999 container remove e596286b43fa6b58fbbf37d066779552c8ed27c8049d928e046ffb8aa72eb032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williamson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 19:16:10 compute-0 systemd[1]: libpod-conmon-e596286b43fa6b58fbbf37d066779552c8ed27c8049d928e046ffb8aa72eb032.scope: Deactivated successfully.
Jan 23 19:16:10 compute-0 sudo[263780]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:10 compute-0 sudo[263904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:16:10 compute-0 sudo[263904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:10 compute-0 sudo[263904]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:10 compute-0 sudo[263929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:16:10 compute-0 sudo[263929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:10 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:10 compute-0 podman[263965]: 2026-01-23 19:16:10.508844402 +0000 UTC m=+0.049497439 container create e2fe3844527e9066e6ffaf6f8f42fd506523f277a2f4f2a9fdc034f2f5db9d36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_edison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 19:16:10 compute-0 systemd[1]: Started libpod-conmon-e2fe3844527e9066e6ffaf6f8f42fd506523f277a2f4f2a9fdc034f2f5db9d36.scope.
Jan 23 19:16:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:16:10 compute-0 podman[263965]: 2026-01-23 19:16:10.490267729 +0000 UTC m=+0.030920796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:16:10 compute-0 podman[263965]: 2026-01-23 19:16:10.594363971 +0000 UTC m=+0.135017008 container init e2fe3844527e9066e6ffaf6f8f42fd506523f277a2f4f2a9fdc034f2f5db9d36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_edison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 19:16:10 compute-0 podman[263965]: 2026-01-23 19:16:10.603358741 +0000 UTC m=+0.144011778 container start e2fe3844527e9066e6ffaf6f8f42fd506523f277a2f4f2a9fdc034f2f5db9d36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:16:10 compute-0 admiring_edison[263981]: 167 167
Jan 23 19:16:10 compute-0 systemd[1]: libpod-e2fe3844527e9066e6ffaf6f8f42fd506523f277a2f4f2a9fdc034f2f5db9d36.scope: Deactivated successfully.
Jan 23 19:16:10 compute-0 podman[263965]: 2026-01-23 19:16:10.610993617 +0000 UTC m=+0.151646674 container attach e2fe3844527e9066e6ffaf6f8f42fd506523f277a2f4f2a9fdc034f2f5db9d36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:16:10 compute-0 podman[263965]: 2026-01-23 19:16:10.612646747 +0000 UTC m=+0.153299794 container died e2fe3844527e9066e6ffaf6f8f42fd506523f277a2f4f2a9fdc034f2f5db9d36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_edison, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 19:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4583fa5038a4389224f16a4ecb0c9605732dfcf99bacbaa5153a419e9915f11-merged.mount: Deactivated successfully.
Jan 23 19:16:10 compute-0 ceph-mon[75097]: pgmap v1405: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:10 compute-0 podman[263965]: 2026-01-23 19:16:10.679789056 +0000 UTC m=+0.220442093 container remove e2fe3844527e9066e6ffaf6f8f42fd506523f277a2f4f2a9fdc034f2f5db9d36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_edison, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 23 19:16:10 compute-0 systemd[1]: libpod-conmon-e2fe3844527e9066e6ffaf6f8f42fd506523f277a2f4f2a9fdc034f2f5db9d36.scope: Deactivated successfully.
Jan 23 19:16:10 compute-0 podman[264007]: 2026-01-23 19:16:10.850757671 +0000 UTC m=+0.047387147 container create 9e80b4e185a79a7a8a3f748a375fd65795ddfa2d760fb3a52b02672a984c193c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_yalow, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 23 19:16:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:10 compute-0 systemd[1]: Started libpod-conmon-9e80b4e185a79a7a8a3f748a375fd65795ddfa2d760fb3a52b02672a984c193c.scope.
Jan 23 19:16:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8afb11164ddedbd631f8f22de06efd31ceac066b63cc3ac784c405840f1de97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8afb11164ddedbd631f8f22de06efd31ceac066b63cc3ac784c405840f1de97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8afb11164ddedbd631f8f22de06efd31ceac066b63cc3ac784c405840f1de97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8afb11164ddedbd631f8f22de06efd31ceac066b63cc3ac784c405840f1de97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:10 compute-0 podman[264007]: 2026-01-23 19:16:10.925836455 +0000 UTC m=+0.122465851 container init 9e80b4e185a79a7a8a3f748a375fd65795ddfa2d760fb3a52b02672a984c193c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 19:16:10 compute-0 podman[264007]: 2026-01-23 19:16:10.830090058 +0000 UTC m=+0.026719464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:16:10 compute-0 podman[264007]: 2026-01-23 19:16:10.935271286 +0000 UTC m=+0.131900662 container start 9e80b4e185a79a7a8a3f748a375fd65795ddfa2d760fb3a52b02672a984c193c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:16:10 compute-0 podman[264007]: 2026-01-23 19:16:10.9391208 +0000 UTC m=+0.135750176 container attach 9e80b4e185a79a7a8a3f748a375fd65795ddfa2d760fb3a52b02672a984c193c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_yalow, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 19:16:11 compute-0 friendly_yalow[264024]: {
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:     "0": [
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:         {
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "devices": [
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "/dev/loop3"
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             ],
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_name": "ceph_lv0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_size": "21470642176",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "name": "ceph_lv0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "tags": {
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.cluster_name": "ceph",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.crush_device_class": "",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.encrypted": "0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.objectstore": "bluestore",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.osd_id": "0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.type": "block",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.vdo": "0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.with_tpm": "0"
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             },
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "type": "block",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "vg_name": "ceph_vg0"
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:         }
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:     ],
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:     "1": [
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:         {
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "devices": [
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "/dev/loop4"
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             ],
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_name": "ceph_lv1",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_size": "21470642176",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "name": "ceph_lv1",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "tags": {
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.cluster_name": "ceph",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.crush_device_class": "",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.encrypted": "0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.objectstore": "bluestore",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.osd_id": "1",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.type": "block",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.vdo": "0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.with_tpm": "0"
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             },
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "type": "block",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "vg_name": "ceph_vg1"
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:         }
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:     ],
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:     "2": [
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:         {
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "devices": [
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "/dev/loop5"
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             ],
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_name": "ceph_lv2",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_size": "21470642176",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "name": "ceph_lv2",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "tags": {
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.cluster_name": "ceph",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.crush_device_class": "",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.encrypted": "0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.objectstore": "bluestore",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.osd_id": "2",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.type": "block",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.vdo": "0",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:                 "ceph.with_tpm": "0"
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             },
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "type": "block",
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:             "vg_name": "ceph_vg2"
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:         }
Jan 23 19:16:11 compute-0 friendly_yalow[264024]:     ]
Jan 23 19:16:11 compute-0 friendly_yalow[264024]: }
Jan 23 19:16:11 compute-0 systemd[1]: libpod-9e80b4e185a79a7a8a3f748a375fd65795ddfa2d760fb3a52b02672a984c193c.scope: Deactivated successfully.
Jan 23 19:16:11 compute-0 podman[264007]: 2026-01-23 19:16:11.236167154 +0000 UTC m=+0.432796560 container died 9e80b4e185a79a7a8a3f748a375fd65795ddfa2d760fb3a52b02672a984c193c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 23 19:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8afb11164ddedbd631f8f22de06efd31ceac066b63cc3ac784c405840f1de97-merged.mount: Deactivated successfully.
Jan 23 19:16:11 compute-0 nova_compute[240119]: 2026-01-23 19:16:11.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:16:11 compute-0 podman[264007]: 2026-01-23 19:16:11.733400585 +0000 UTC m=+0.930029961 container remove 9e80b4e185a79a7a8a3f748a375fd65795ddfa2d760fb3a52b02672a984c193c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_yalow, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:16:11 compute-0 sudo[263929]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:11 compute-0 systemd[1]: libpod-conmon-9e80b4e185a79a7a8a3f748a375fd65795ddfa2d760fb3a52b02672a984c193c.scope: Deactivated successfully.
Jan 23 19:16:11 compute-0 sudo[264045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:16:11 compute-0 sudo[264045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:11 compute-0 sudo[264045]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:11 compute-0 sudo[264070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:16:11 compute-0 sudo[264070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:12 compute-0 podman[264108]: 2026-01-23 19:16:12.152827277 +0000 UTC m=+0.024974051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:16:12 compute-0 nova_compute[240119]: 2026-01-23 19:16:12.347 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:16:12 compute-0 nova_compute[240119]: 2026-01-23 19:16:12.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:16:12 compute-0 nova_compute[240119]: 2026-01-23 19:16:12.377 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:16:12 compute-0 nova_compute[240119]: 2026-01-23 19:16:12.377 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:16:12 compute-0 nova_compute[240119]: 2026-01-23 19:16:12.377 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:16:12 compute-0 nova_compute[240119]: 2026-01-23 19:16:12.378 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:16:12 compute-0 nova_compute[240119]: 2026-01-23 19:16:12.378 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:16:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:13 compute-0 ceph-mon[75097]: pgmap v1406: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:13 compute-0 podman[264108]: 2026-01-23 19:16:13.434220888 +0000 UTC m=+1.306367622 container create b352bfceb5f270828d6f6c19afc7d0eb35e1c3d2f2a5476bebde584752408343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 19:16:13 compute-0 systemd[1]: Started libpod-conmon-b352bfceb5f270828d6f6c19afc7d0eb35e1c3d2f2a5476bebde584752408343.scope.
Jan 23 19:16:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:16:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:16:13.596 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:16:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:16:13.597 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:16:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:16:13.597 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:16:13 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:16:13 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4250394165' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:16:13 compute-0 nova_compute[240119]: 2026-01-23 19:16:13.867 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:16:13 compute-0 podman[264108]: 2026-01-23 19:16:13.9109878 +0000 UTC m=+1.783134554 container init b352bfceb5f270828d6f6c19afc7d0eb35e1c3d2f2a5476bebde584752408343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 19:16:13 compute-0 podman[264108]: 2026-01-23 19:16:13.92082002 +0000 UTC m=+1.792966754 container start b352bfceb5f270828d6f6c19afc7d0eb35e1c3d2f2a5476bebde584752408343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 23 19:16:13 compute-0 brave_murdock[264144]: 167 167
Jan 23 19:16:13 compute-0 systemd[1]: libpod-b352bfceb5f270828d6f6c19afc7d0eb35e1c3d2f2a5476bebde584752408343.scope: Deactivated successfully.
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.054 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.055 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4900MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.055 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.056 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.116 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.116 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.136 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:16:14 compute-0 podman[264108]: 2026-01-23 19:16:14.298312338 +0000 UTC m=+2.170459122 container attach b352bfceb5f270828d6f6c19afc7d0eb35e1c3d2f2a5476bebde584752408343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 19:16:14 compute-0 podman[264108]: 2026-01-23 19:16:14.298856881 +0000 UTC m=+2.171003615 container died b352bfceb5f270828d6f6c19afc7d0eb35e1c3d2f2a5476bebde584752408343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:16:14 compute-0 ceph-mon[75097]: pgmap v1407: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:14 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4250394165' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:16:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:16:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2384139628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.703 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.710 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.727 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.729 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:16:14 compute-0 nova_compute[240119]: 2026-01-23 19:16:14.729 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:16:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fc79795df41e776bc8c9a473f82a552991c5624e9c1dbbddd297715d04f4c09-merged.mount: Deactivated successfully.
Jan 23 19:16:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:15 compute-0 podman[264108]: 2026-01-23 19:16:15.099232606 +0000 UTC m=+2.971379350 container remove b352bfceb5f270828d6f6c19afc7d0eb35e1c3d2f2a5476bebde584752408343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:16:15 compute-0 systemd[1]: libpod-conmon-b352bfceb5f270828d6f6c19afc7d0eb35e1c3d2f2a5476bebde584752408343.scope: Deactivated successfully.
Jan 23 19:16:15 compute-0 podman[264193]: 2026-01-23 19:16:15.280225716 +0000 UTC m=+0.045889322 container create f5d937c081cd1a2d42b2ca165ee00b7f39366f28e28864424a9ebb5367162c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 19:16:15 compute-0 systemd[1]: Started libpod-conmon-f5d937c081cd1a2d42b2ca165ee00b7f39366f28e28864424a9ebb5367162c19.scope.
Jan 23 19:16:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:16:15 compute-0 podman[264193]: 2026-01-23 19:16:15.261226391 +0000 UTC m=+0.026890017 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308f2205531e3673850c3b2b5b19163ce4a20a268f492b46789cc1b0d3543131/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308f2205531e3673850c3b2b5b19163ce4a20a268f492b46789cc1b0d3543131/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308f2205531e3673850c3b2b5b19163ce4a20a268f492b46789cc1b0d3543131/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308f2205531e3673850c3b2b5b19163ce4a20a268f492b46789cc1b0d3543131/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:16:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:15 compute-0 podman[264193]: 2026-01-23 19:16:15.379674064 +0000 UTC m=+0.145337700 container init f5d937c081cd1a2d42b2ca165ee00b7f39366f28e28864424a9ebb5367162c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:16:15 compute-0 podman[264193]: 2026-01-23 19:16:15.386795828 +0000 UTC m=+0.152459434 container start f5d937c081cd1a2d42b2ca165ee00b7f39366f28e28864424a9ebb5367162c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 19:16:15 compute-0 podman[264193]: 2026-01-23 19:16:15.559229189 +0000 UTC m=+0.324892795 container attach f5d937c081cd1a2d42b2ca165ee00b7f39366f28e28864424a9ebb5367162c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 19:16:15 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2384139628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:16:15 compute-0 nova_compute[240119]: 2026-01-23 19:16:15.730 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:16:15 compute-0 nova_compute[240119]: 2026-01-23 19:16:15.733 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:16:15 compute-0 nova_compute[240119]: 2026-01-23 19:16:15.733 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:16:15 compute-0 nova_compute[240119]: 2026-01-23 19:16:15.733 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:16:16 compute-0 lvm[264288]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:16:16 compute-0 lvm[264287]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:16:16 compute-0 lvm[264287]: VG ceph_vg0 finished
Jan 23 19:16:16 compute-0 lvm[264288]: VG ceph_vg1 finished
Jan 23 19:16:16 compute-0 lvm[264290]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:16:16 compute-0 lvm[264290]: VG ceph_vg2 finished
Jan 23 19:16:16 compute-0 pensive_swirles[264209]: {}
Jan 23 19:16:16 compute-0 systemd[1]: libpod-f5d937c081cd1a2d42b2ca165ee00b7f39366f28e28864424a9ebb5367162c19.scope: Deactivated successfully.
Jan 23 19:16:16 compute-0 systemd[1]: libpod-f5d937c081cd1a2d42b2ca165ee00b7f39366f28e28864424a9ebb5367162c19.scope: Consumed 1.445s CPU time.
Jan 23 19:16:16 compute-0 podman[264193]: 2026-01-23 19:16:16.263068516 +0000 UTC m=+1.028732152 container died f5d937c081cd1a2d42b2ca165ee00b7f39366f28e28864424a9ebb5367162c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 23 19:16:16 compute-0 nova_compute[240119]: 2026-01-23 19:16:16.351 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:16:16 compute-0 nova_compute[240119]: 2026-01-23 19:16:16.352 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:16:16 compute-0 nova_compute[240119]: 2026-01-23 19:16:16.352 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:16:16 compute-0 nova_compute[240119]: 2026-01-23 19:16:16.393 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:16:16 compute-0 ceph-mon[75097]: pgmap v1408: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-308f2205531e3673850c3b2b5b19163ce4a20a268f492b46789cc1b0d3543131-merged.mount: Deactivated successfully.
Jan 23 19:16:17 compute-0 podman[264193]: 2026-01-23 19:16:17.089742163 +0000 UTC m=+1.855405769 container remove f5d937c081cd1a2d42b2ca165ee00b7f39366f28e28864424a9ebb5367162c19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:16:17 compute-0 systemd[1]: libpod-conmon-f5d937c081cd1a2d42b2ca165ee00b7f39366f28e28864424a9ebb5367162c19.scope: Deactivated successfully.
Jan 23 19:16:17 compute-0 sudo[264070]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:16:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:16:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:16:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:16:17 compute-0 sudo[264305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:16:17 compute-0 sudo[264305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:16:17 compute-0 sudo[264305]: pam_unix(sudo:session): session closed for user root
Jan 23 19:16:18 compute-0 ceph-mon[75097]: pgmap v1409: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:16:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:16:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:19 compute-0 nova_compute[240119]: 2026-01-23 19:16:19.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:16:20 compute-0 ceph-mon[75097]: pgmap v1410: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:20 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:21 compute-0 nova_compute[240119]: 2026-01-23 19:16:21.343 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:16:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:22 compute-0 ceph-mon[75097]: pgmap v1411: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:24 compute-0 ceph-mon[75097]: pgmap v1412: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:25 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:26 compute-0 podman[264331]: 2026-01-23 19:16:26.009411893 +0000 UTC m=+0.087276637 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 23 19:16:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:27 compute-0 ceph-mon[75097]: pgmap v1413: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:28 compute-0 ceph-mon[75097]: pgmap v1414: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:16:28
Jan 23 19:16:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:16:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:16:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.log', 'default.rgw.control']
Jan 23 19:16:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:16:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:16:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:16:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:30 compute-0 ceph-mon[75097]: pgmap v1415: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:16:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:16:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:16:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:16:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:16:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:16:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:16:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:16:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:16:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:16:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:16:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:16:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:16:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:16:32 compute-0 ceph-mon[75097]: pgmap v1416: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:32 compute-0 podman[264359]: 2026-01-23 19:16:32.967465912 +0000 UTC m=+0.053047203 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 19:16:34 compute-0 ceph-mon[75097]: pgmap v1417: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Jan 23 19:16:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:36 compute-0 ceph-mon[75097]: pgmap v1418: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Jan 23 19:16:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Jan 23 19:16:38 compute-0 ceph-mon[75097]: pgmap v1419: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Jan 23 19:16:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Jan 23 19:16:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:16:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 19:16:41 compute-0 ceph-mon[75097]: pgmap v1420: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Jan 23 19:16:42 compute-0 ceph-mon[75097]: pgmap v1421: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 19:16:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 19:16:44 compute-0 ceph-mon[75097]: pgmap v1422: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 19:16:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 19:16:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:46 compute-0 ceph-mon[75097]: pgmap v1423: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 23 19:16:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 23 19:16:47 compute-0 ceph-mon[75097]: pgmap v1424: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 23 19:16:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 23 19:16:50 compute-0 ceph-mon[75097]: pgmap v1425: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 23 19:16:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 23 19:16:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:52 compute-0 ceph-mon[75097]: pgmap v1426: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 23 19:16:54 compute-0 ceph-mon[75097]: pgmap v1427: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:55 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:16:56 compute-0 ceph-mon[75097]: pgmap v1428: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:16:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468252400' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:16:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:16:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468252400' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:16:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:57 compute-0 podman[264380]: 2026-01-23 19:16:57.002890164 +0000 UTC m=+0.091152622 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 19:16:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3468252400' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:16:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/3468252400' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:16:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:16:59 compute-0 ceph-mon[75097]: pgmap v1429: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:17:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:17:00 compute-0 ceph-mon[75097]: pgmap v1430: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:17:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:17:00 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:17:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:17:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:01 compute-0 anacron[7482]: Job `cron.monthly' started
Jan 23 19:17:01 compute-0 anacron[7482]: Job `cron.monthly' terminated
Jan 23 19:17:01 compute-0 anacron[7482]: Normal exit (3 jobs run)
Jan 23 19:17:02 compute-0 ceph-mon[75097]: pgmap v1431: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:03 compute-0 podman[264409]: 2026-01-23 19:17:03.981994276 +0000 UTC m=+0.060452304 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 19:17:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:05 compute-0 ceph-mon[75097]: pgmap v1432: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:06 compute-0 ceph-mon[75097]: pgmap v1433: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:07 compute-0 nova_compute[240119]: 2026-01-23 19:17:07.444 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:17:08 compute-0 ceph-mon[75097]: pgmap v1434: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:11 compute-0 ceph-mon[75097]: pgmap v1435: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:12 compute-0 nova_compute[240119]: 2026-01-23 19:17:12.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:17:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:13 compute-0 ceph-mon[75097]: pgmap v1436: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:13 compute-0 nova_compute[240119]: 2026-01-23 19:17:13.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:17:13 compute-0 nova_compute[240119]: 2026-01-23 19:17:13.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:17:13 compute-0 nova_compute[240119]: 2026-01-23 19:17:13.350 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:17:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:17:13.597 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:17:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:17:13.597 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:17:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:17:13.597 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.000 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.000 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.001 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.001 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.002 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:17:14 compute-0 ceph-mon[75097]: pgmap v1437: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:17:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2794891895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.579 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.749 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.751 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4934MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.751 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.751 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:17:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.932 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.933 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:17:14 compute-0 nova_compute[240119]: 2026-01-23 19:17:14.948 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:17:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:17:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1373843979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:17:15 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2794891895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:17:15 compute-0 nova_compute[240119]: 2026-01-23 19:17:15.568 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:17:15 compute-0 nova_compute[240119]: 2026-01-23 19:17:15.573 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:17:15 compute-0 nova_compute[240119]: 2026-01-23 19:17:15.907 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:17:15 compute-0 nova_compute[240119]: 2026-01-23 19:17:15.910 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:17:15 compute-0 nova_compute[240119]: 2026-01-23 19:17:15.910 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:17:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:16 compute-0 ceph-mon[75097]: pgmap v1438: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:16 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1373843979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:17:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:16 compute-0 nova_compute[240119]: 2026-01-23 19:17:16.909 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:17:16 compute-0 nova_compute[240119]: 2026-01-23 19:17:16.910 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:17:16 compute-0 nova_compute[240119]: 2026-01-23 19:17:16.910 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:17:17 compute-0 sudo[264473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:17:17 compute-0 sudo[264473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:17:17 compute-0 sudo[264473]: pam_unix(sudo:session): session closed for user root
Jan 23 19:17:17 compute-0 nova_compute[240119]: 2026-01-23 19:17:17.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:17:17 compute-0 nova_compute[240119]: 2026-01-23 19:17:17.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:17:17 compute-0 nova_compute[240119]: 2026-01-23 19:17:17.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:17:17 compute-0 sudo[264498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:17:17 compute-0 sudo[264498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:17:17 compute-0 nova_compute[240119]: 2026-01-23 19:17:17.362 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:17:17 compute-0 sudo[264498]: pam_unix(sudo:session): session closed for user root
Jan 23 19:17:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 19:17:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 23 19:17:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:17:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:17:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:17:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:17:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:17:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:17:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:17:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:17:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:17:17 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:17:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:17:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:17:17 compute-0 sudo[264554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:17:17 compute-0 sudo[264554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:17:17 compute-0 sudo[264554]: pam_unix(sudo:session): session closed for user root
Jan 23 19:17:18 compute-0 sudo[264579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:17:18 compute-0 sudo[264579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:17:18 compute-0 podman[264617]: 2026-01-23 19:17:18.289216321 +0000 UTC m=+0.041607885 container create b5a902d3c2c6c100224f78eec132592d110f7c1c5e51d12e7c8366dbeb7a0f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:17:18 compute-0 podman[264617]: 2026-01-23 19:17:18.270604987 +0000 UTC m=+0.022996571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:17:18 compute-0 systemd[1]: Started libpod-conmon-b5a902d3c2c6c100224f78eec132592d110f7c1c5e51d12e7c8366dbeb7a0f02.scope.
Jan 23 19:17:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:17:18 compute-0 podman[264617]: 2026-01-23 19:17:18.451887403 +0000 UTC m=+0.204279007 container init b5a902d3c2c6c100224f78eec132592d110f7c1c5e51d12e7c8366dbeb7a0f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_faraday, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 19:17:18 compute-0 podman[264617]: 2026-01-23 19:17:18.465807572 +0000 UTC m=+0.218199176 container start b5a902d3c2c6c100224f78eec132592d110f7c1c5e51d12e7c8366dbeb7a0f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:17:18 compute-0 boring_faraday[264634]: 167 167
Jan 23 19:17:18 compute-0 podman[264617]: 2026-01-23 19:17:18.470842015 +0000 UTC m=+0.223233609 container attach b5a902d3c2c6c100224f78eec132592d110f7c1c5e51d12e7c8366dbeb7a0f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 19:17:18 compute-0 systemd[1]: libpod-b5a902d3c2c6c100224f78eec132592d110f7c1c5e51d12e7c8366dbeb7a0f02.scope: Deactivated successfully.
Jan 23 19:17:18 compute-0 podman[264617]: 2026-01-23 19:17:18.472059695 +0000 UTC m=+0.224451259 container died b5a902d3c2c6c100224f78eec132592d110f7c1c5e51d12e7c8366dbeb7a0f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-890d4ff9fc80d9d435cf44658f83fc4bb9684255416b615a12376f2ef4f0b280-merged.mount: Deactivated successfully.
Jan 23 19:17:18 compute-0 podman[264617]: 2026-01-23 19:17:18.621289259 +0000 UTC m=+0.373680853 container remove b5a902d3c2c6c100224f78eec132592d110f7c1c5e51d12e7c8366dbeb7a0f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:17:18 compute-0 systemd[1]: libpod-conmon-b5a902d3c2c6c100224f78eec132592d110f7c1c5e51d12e7c8366dbeb7a0f02.scope: Deactivated successfully.
Jan 23 19:17:18 compute-0 ceph-mon[75097]: pgmap v1439: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 23 19:17:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:17:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:17:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:17:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:17:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:17:18 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:17:18 compute-0 podman[264660]: 2026-01-23 19:17:18.859458571 +0000 UTC m=+0.089697676 container create 4b571d1605caea3113a6381283e0c6e569d8c572c553fe41fb8965415215affa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mclaren, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:17:18 compute-0 podman[264660]: 2026-01-23 19:17:18.794702653 +0000 UTC m=+0.024941798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:17:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:18 compute-0 systemd[1]: Started libpod-conmon-4b571d1605caea3113a6381283e0c6e569d8c572c553fe41fb8965415215affa.scope.
Jan 23 19:17:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bed927b1671e3fefbe1dd632708c7691d558442abc6814a77fd7e7a02b0ad6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bed927b1671e3fefbe1dd632708c7691d558442abc6814a77fd7e7a02b0ad6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bed927b1671e3fefbe1dd632708c7691d558442abc6814a77fd7e7a02b0ad6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bed927b1671e3fefbe1dd632708c7691d558442abc6814a77fd7e7a02b0ad6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bed927b1671e3fefbe1dd632708c7691d558442abc6814a77fd7e7a02b0ad6b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:19 compute-0 podman[264660]: 2026-01-23 19:17:19.286895923 +0000 UTC m=+0.517135118 container init 4b571d1605caea3113a6381283e0c6e569d8c572c553fe41fb8965415215affa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 19:17:19 compute-0 podman[264660]: 2026-01-23 19:17:19.297276036 +0000 UTC m=+0.527515141 container start 4b571d1605caea3113a6381283e0c6e569d8c572c553fe41fb8965415215affa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:17:19 compute-0 podman[264660]: 2026-01-23 19:17:19.303834026 +0000 UTC m=+0.534073181 container attach 4b571d1605caea3113a6381283e0c6e569d8c572c553fe41fb8965415215affa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:17:19 compute-0 focused_mclaren[264677]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:17:19 compute-0 focused_mclaren[264677]: --> All data devices are unavailable
Jan 23 19:17:19 compute-0 systemd[1]: libpod-4b571d1605caea3113a6381283e0c6e569d8c572c553fe41fb8965415215affa.scope: Deactivated successfully.
Jan 23 19:17:19 compute-0 podman[264660]: 2026-01-23 19:17:19.792699744 +0000 UTC m=+1.022938849 container died 4b571d1605caea3113a6381283e0c6e569d8c572c553fe41fb8965415215affa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 19:17:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bed927b1671e3fefbe1dd632708c7691d558442abc6814a77fd7e7a02b0ad6b-merged.mount: Deactivated successfully.
Jan 23 19:17:20 compute-0 podman[264660]: 2026-01-23 19:17:20.009508985 +0000 UTC m=+1.239748090 container remove 4b571d1605caea3113a6381283e0c6e569d8c572c553fe41fb8965415215affa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 23 19:17:20 compute-0 sudo[264579]: pam_unix(sudo:session): session closed for user root
Jan 23 19:17:20 compute-0 systemd[1]: libpod-conmon-4b571d1605caea3113a6381283e0c6e569d8c572c553fe41fb8965415215affa.scope: Deactivated successfully.
Jan 23 19:17:20 compute-0 sudo[264707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:17:20 compute-0 sudo[264707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:17:20 compute-0 sudo[264707]: pam_unix(sudo:session): session closed for user root
Jan 23 19:17:20 compute-0 sudo[264732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:17:20 compute-0 sudo[264732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:17:20 compute-0 nova_compute[240119]: 2026-01-23 19:17:20.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:17:20 compute-0 podman[264769]: 2026-01-23 19:17:20.492394038 +0000 UTC m=+0.040043037 container create 6f0d4a42edf8720f9d84f4fc4722b11fa29604a5f7cb4da66e1bfa732c852038 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 19:17:20 compute-0 systemd[1]: Started libpod-conmon-6f0d4a42edf8720f9d84f4fc4722b11fa29604a5f7cb4da66e1bfa732c852038.scope.
Jan 23 19:17:20 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:17:20 compute-0 podman[264769]: 2026-01-23 19:17:20.473793704 +0000 UTC m=+0.021442703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:17:20 compute-0 podman[264769]: 2026-01-23 19:17:20.569709891 +0000 UTC m=+0.117358920 container init 6f0d4a42edf8720f9d84f4fc4722b11fa29604a5f7cb4da66e1bfa732c852038 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 23 19:17:20 compute-0 podman[264769]: 2026-01-23 19:17:20.575534963 +0000 UTC m=+0.123183962 container start 6f0d4a42edf8720f9d84f4fc4722b11fa29604a5f7cb4da66e1bfa732c852038 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bell, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 19:17:20 compute-0 podman[264769]: 2026-01-23 19:17:20.578970086 +0000 UTC m=+0.126619085 container attach 6f0d4a42edf8720f9d84f4fc4722b11fa29604a5f7cb4da66e1bfa732c852038 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 19:17:20 compute-0 lucid_bell[264785]: 167 167
Jan 23 19:17:20 compute-0 systemd[1]: libpod-6f0d4a42edf8720f9d84f4fc4722b11fa29604a5f7cb4da66e1bfa732c852038.scope: Deactivated successfully.
Jan 23 19:17:20 compute-0 podman[264769]: 2026-01-23 19:17:20.583022975 +0000 UTC m=+0.130672004 container died 6f0d4a42edf8720f9d84f4fc4722b11fa29604a5f7cb4da66e1bfa732c852038 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bell, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:17:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8edd021ddf3d569cc83e5598e617ec946880c417c75035083895de0909b6c78f-merged.mount: Deactivated successfully.
Jan 23 19:17:20 compute-0 podman[264769]: 2026-01-23 19:17:20.623952432 +0000 UTC m=+0.171601431 container remove 6f0d4a42edf8720f9d84f4fc4722b11fa29604a5f7cb4da66e1bfa732c852038 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:17:20 compute-0 systemd[1]: libpod-conmon-6f0d4a42edf8720f9d84f4fc4722b11fa29604a5f7cb4da66e1bfa732c852038.scope: Deactivated successfully.
Jan 23 19:17:20 compute-0 podman[264811]: 2026-01-23 19:17:20.778974588 +0000 UTC m=+0.027527421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:17:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:21 compute-0 ceph-mon[75097]: pgmap v1440: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:21 compute-0 podman[264811]: 2026-01-23 19:17:21.056941099 +0000 UTC m=+0.305493922 container create e276ccdabe6c2bd96115bd0689891637e039c19f1ba0303240cae927999d9cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feistel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 23 19:17:21 compute-0 systemd[1]: Started libpod-conmon-e276ccdabe6c2bd96115bd0689891637e039c19f1ba0303240cae927999d9cb4.scope.
Jan 23 19:17:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3cd875a948d11659ded34703485fa5b0c98cc90fc22c0a354dad1ddb1f6f94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3cd875a948d11659ded34703485fa5b0c98cc90fc22c0a354dad1ddb1f6f94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3cd875a948d11659ded34703485fa5b0c98cc90fc22c0a354dad1ddb1f6f94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3cd875a948d11659ded34703485fa5b0c98cc90fc22c0a354dad1ddb1f6f94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:21 compute-0 podman[264811]: 2026-01-23 19:17:21.624288429 +0000 UTC m=+0.872841342 container init e276ccdabe6c2bd96115bd0689891637e039c19f1ba0303240cae927999d9cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feistel, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:17:21 compute-0 podman[264811]: 2026-01-23 19:17:21.64118425 +0000 UTC m=+0.889737103 container start e276ccdabe6c2bd96115bd0689891637e039c19f1ba0303240cae927999d9cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:17:21 compute-0 podman[264811]: 2026-01-23 19:17:21.689254471 +0000 UTC m=+0.937807284 container attach e276ccdabe6c2bd96115bd0689891637e039c19f1ba0303240cae927999d9cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feistel, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:17:21 compute-0 competent_feistel[264827]: {
Jan 23 19:17:21 compute-0 competent_feistel[264827]:     "0": [
Jan 23 19:17:21 compute-0 competent_feistel[264827]:         {
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "devices": [
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "/dev/loop3"
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             ],
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_name": "ceph_lv0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_size": "21470642176",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "name": "ceph_lv0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "tags": {
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.cluster_name": "ceph",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.crush_device_class": "",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.encrypted": "0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.objectstore": "bluestore",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.osd_id": "0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.type": "block",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.vdo": "0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.with_tpm": "0"
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             },
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "type": "block",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "vg_name": "ceph_vg0"
Jan 23 19:17:21 compute-0 competent_feistel[264827]:         }
Jan 23 19:17:21 compute-0 competent_feistel[264827]:     ],
Jan 23 19:17:21 compute-0 competent_feistel[264827]:     "1": [
Jan 23 19:17:21 compute-0 competent_feistel[264827]:         {
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "devices": [
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "/dev/loop4"
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             ],
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_name": "ceph_lv1",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_size": "21470642176",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "name": "ceph_lv1",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "tags": {
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.cluster_name": "ceph",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.crush_device_class": "",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.encrypted": "0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.objectstore": "bluestore",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.osd_id": "1",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.type": "block",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.vdo": "0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.with_tpm": "0"
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             },
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "type": "block",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "vg_name": "ceph_vg1"
Jan 23 19:17:21 compute-0 competent_feistel[264827]:         }
Jan 23 19:17:21 compute-0 competent_feistel[264827]:     ],
Jan 23 19:17:21 compute-0 competent_feistel[264827]:     "2": [
Jan 23 19:17:21 compute-0 competent_feistel[264827]:         {
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "devices": [
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "/dev/loop5"
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             ],
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_name": "ceph_lv2",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_size": "21470642176",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "name": "ceph_lv2",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "tags": {
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.cluster_name": "ceph",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.crush_device_class": "",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.encrypted": "0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.objectstore": "bluestore",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.osd_id": "2",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.type": "block",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.vdo": "0",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:                 "ceph.with_tpm": "0"
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             },
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "type": "block",
Jan 23 19:17:21 compute-0 competent_feistel[264827]:             "vg_name": "ceph_vg2"
Jan 23 19:17:21 compute-0 competent_feistel[264827]:         }
Jan 23 19:17:21 compute-0 competent_feistel[264827]:     ]
Jan 23 19:17:21 compute-0 competent_feistel[264827]: }
Jan 23 19:17:21 compute-0 systemd[1]: libpod-e276ccdabe6c2bd96115bd0689891637e039c19f1ba0303240cae927999d9cb4.scope: Deactivated successfully.
Jan 23 19:17:21 compute-0 podman[264811]: 2026-01-23 19:17:21.978887376 +0000 UTC m=+1.227440209 container died e276ccdabe6c2bd96115bd0689891637e039c19f1ba0303240cae927999d9cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 19:17:22 compute-0 ceph-mon[75097]: pgmap v1441: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f3cd875a948d11659ded34703485fa5b0c98cc90fc22c0a354dad1ddb1f6f94-merged.mount: Deactivated successfully.
Jan 23 19:17:22 compute-0 podman[264811]: 2026-01-23 19:17:22.404145376 +0000 UTC m=+1.652698229 container remove e276ccdabe6c2bd96115bd0689891637e039c19f1ba0303240cae927999d9cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 19:17:22 compute-0 sudo[264732]: pam_unix(sudo:session): session closed for user root
Jan 23 19:17:22 compute-0 systemd[1]: libpod-conmon-e276ccdabe6c2bd96115bd0689891637e039c19f1ba0303240cae927999d9cb4.scope: Deactivated successfully.
Jan 23 19:17:22 compute-0 sudo[264847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:17:22 compute-0 sudo[264847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:17:22 compute-0 sudo[264847]: pam_unix(sudo:session): session closed for user root
Jan 23 19:17:22 compute-0 sudo[264872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:17:22 compute-0 sudo[264872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:17:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:22 compute-0 podman[264910]: 2026-01-23 19:17:22.820367674 +0000 UTC m=+0.024263902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:17:23 compute-0 podman[264910]: 2026-01-23 19:17:23.523196014 +0000 UTC m=+0.727092232 container create 1896c0a5de22328b7fa98825725401d1a1812dbb2b51f0c3fdecf9337b473b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 19:17:24 compute-0 systemd[1]: Started libpod-conmon-1896c0a5de22328b7fa98825725401d1a1812dbb2b51f0c3fdecf9337b473b7a.scope.
Jan 23 19:17:24 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:17:24 compute-0 ceph-mon[75097]: pgmap v1442: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:24 compute-0 podman[264910]: 2026-01-23 19:17:24.849269106 +0000 UTC m=+2.053165334 container init 1896c0a5de22328b7fa98825725401d1a1812dbb2b51f0c3fdecf9337b473b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldwasser, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 23 19:17:24 compute-0 podman[264910]: 2026-01-23 19:17:24.857039476 +0000 UTC m=+2.060935684 container start 1896c0a5de22328b7fa98825725401d1a1812dbb2b51f0c3fdecf9337b473b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:17:24 compute-0 clever_goldwasser[264927]: 167 167
Jan 23 19:17:24 compute-0 systemd[1]: libpod-1896c0a5de22328b7fa98825725401d1a1812dbb2b51f0c3fdecf9337b473b7a.scope: Deactivated successfully.
Jan 23 19:17:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:25 compute-0 podman[264910]: 2026-01-23 19:17:25.056599166 +0000 UTC m=+2.260495404 container attach 1896c0a5de22328b7fa98825725401d1a1812dbb2b51f0c3fdecf9337b473b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 23 19:17:25 compute-0 podman[264910]: 2026-01-23 19:17:25.057679383 +0000 UTC m=+2.261575611 container died 1896c0a5de22328b7fa98825725401d1a1812dbb2b51f0c3fdecf9337b473b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-244be6b091a1122c49815fe2d39aa2c575810bcd089999f2232170e822740e74-merged.mount: Deactivated successfully.
Jan 23 19:17:25 compute-0 podman[264910]: 2026-01-23 19:17:25.27308225 +0000 UTC m=+2.476978458 container remove 1896c0a5de22328b7fa98825725401d1a1812dbb2b51f0c3fdecf9337b473b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldwasser, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 23 19:17:25 compute-0 systemd[1]: libpod-conmon-1896c0a5de22328b7fa98825725401d1a1812dbb2b51f0c3fdecf9337b473b7a.scope: Deactivated successfully.
Jan 23 19:17:25 compute-0 podman[264951]: 2026-01-23 19:17:25.400596446 +0000 UTC m=+0.021293369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:17:25 compute-0 podman[264951]: 2026-01-23 19:17:25.491794988 +0000 UTC m=+0.112491891 container create 7bdce935be75bdcef2899874443d97db961206871a625c89c18d47f16bee652f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 19:17:25 compute-0 systemd[1]: Started libpod-conmon-7bdce935be75bdcef2899874443d97db961206871a625c89c18d47f16bee652f.scope.
Jan 23 19:17:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e073142e97efcb449a58f8b01cc72ec54a1cb5c7064add019ac16b0fa95c34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e073142e97efcb449a58f8b01cc72ec54a1cb5c7064add019ac16b0fa95c34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e073142e97efcb449a58f8b01cc72ec54a1cb5c7064add019ac16b0fa95c34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e073142e97efcb449a58f8b01cc72ec54a1cb5c7064add019ac16b0fa95c34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:17:25 compute-0 podman[264951]: 2026-01-23 19:17:25.72955406 +0000 UTC m=+0.350250993 container init 7bdce935be75bdcef2899874443d97db961206871a625c89c18d47f16bee652f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:17:25 compute-0 podman[264951]: 2026-01-23 19:17:25.738814015 +0000 UTC m=+0.359510928 container start 7bdce935be75bdcef2899874443d97db961206871a625c89c18d47f16bee652f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:17:25 compute-0 podman[264951]: 2026-01-23 19:17:25.778550773 +0000 UTC m=+0.399247676 container attach 7bdce935be75bdcef2899874443d97db961206871a625c89c18d47f16bee652f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 19:17:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:26 compute-0 lvm[265046]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:17:26 compute-0 lvm[265046]: VG ceph_vg0 finished
Jan 23 19:17:26 compute-0 lvm[265047]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:17:26 compute-0 lvm[265047]: VG ceph_vg1 finished
Jan 23 19:17:26 compute-0 lvm[265049]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:17:26 compute-0 lvm[265049]: VG ceph_vg2 finished
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:26.559784) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195846559867, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1021, "num_deletes": 251, "total_data_size": 1515416, "memory_usage": 1544024, "flush_reason": "Manual Compaction"}
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 23 19:17:26 compute-0 dazzling_sutherland[264967]: {}
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195846598968, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 917187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31137, "largest_seqno": 32157, "table_properties": {"data_size": 913254, "index_size": 1585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10516, "raw_average_key_size": 20, "raw_value_size": 904715, "raw_average_value_size": 1780, "num_data_blocks": 72, "num_entries": 508, "num_filter_entries": 508, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769195750, "oldest_key_time": 1769195750, "file_creation_time": 1769195846, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 39263 microseconds, and 4400 cpu microseconds.
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:26.599053) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 917187 bytes OK
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:26.599080) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:26.602548) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:26.602634) EVENT_LOG_v1 {"time_micros": 1769195846602623, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:26.602663) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1510586, prev total WAL file size 1510586, number of live WAL files 2.
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:26.603522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323536' seq:0, type:0; will stop at (end)
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(895KB)], [65(10MB)]
Jan 23 19:17:26 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195846603624, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11640672, "oldest_snapshot_seqno": -1}
Jan 23 19:17:26 compute-0 systemd[1]: libpod-7bdce935be75bdcef2899874443d97db961206871a625c89c18d47f16bee652f.scope: Deactivated successfully.
Jan 23 19:17:26 compute-0 systemd[1]: libpod-7bdce935be75bdcef2899874443d97db961206871a625c89c18d47f16bee652f.scope: Consumed 1.657s CPU time.
Jan 23 19:17:26 compute-0 podman[264951]: 2026-01-23 19:17:26.62699974 +0000 UTC m=+1.247696643 container died 7bdce935be75bdcef2899874443d97db961206871a625c89c18d47f16bee652f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:17:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6226 keys, 8817760 bytes, temperature: kUnknown
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195847031961, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 8817760, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8777362, "index_size": 23712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157054, "raw_average_key_size": 25, "raw_value_size": 8667013, "raw_average_value_size": 1392, "num_data_blocks": 968, "num_entries": 6226, "num_filter_entries": 6226, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769195846, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:27.032438) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 8817760 bytes
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:27.035376) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.2 rd, 20.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.2 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(22.3) write-amplify(9.6) OK, records in: 6698, records dropped: 472 output_compression: NoCompression
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:27.035406) EVENT_LOG_v1 {"time_micros": 1769195847035394, "job": 36, "event": "compaction_finished", "compaction_time_micros": 428429, "compaction_time_cpu_micros": 27543, "output_level": 6, "num_output_files": 1, "total_output_size": 8817760, "num_input_records": 6698, "num_output_records": 6226, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195847035847, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 23 19:17:27 compute-0 ceph-mon[75097]: pgmap v1443: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195847038415, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:26.603378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:27.038468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:27.038472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:27.038474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:27.038475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:17:27 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:17:27.038477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:17:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-10e073142e97efcb449a58f8b01cc72ec54a1cb5c7064add019ac16b0fa95c34-merged.mount: Deactivated successfully.
Jan 23 19:17:27 compute-0 podman[264951]: 2026-01-23 19:17:27.08639438 +0000 UTC m=+1.707091283 container remove 7bdce935be75bdcef2899874443d97db961206871a625c89c18d47f16bee652f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_sutherland, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:17:27 compute-0 sudo[264872]: pam_unix(sudo:session): session closed for user root
Jan 23 19:17:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:17:27 compute-0 systemd[1]: libpod-conmon-7bdce935be75bdcef2899874443d97db961206871a625c89c18d47f16bee652f.scope: Deactivated successfully.
Jan 23 19:17:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:17:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:17:27 compute-0 podman[265064]: 2026-01-23 19:17:27.242608146 +0000 UTC m=+0.157073828 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 19:17:27 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:17:27 compute-0 sudo[265090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:17:27 compute-0 sudo[265090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:17:27 compute-0 sudo[265090]: pam_unix(sudo:session): session closed for user root
Jan 23 19:17:28 compute-0 ceph-mon[75097]: pgmap v1444: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:17:28 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:17:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:17:28
Jan 23 19:17:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:17:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:17:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'backups', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 23 19:17:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:17:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:17:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:17:30 compute-0 ceph-mon[75097]: pgmap v1445: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:17:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:17:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:17:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:17:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:17:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:17:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:17:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:17:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:17:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:17:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:17:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:17:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:17:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:17:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:32 compute-0 ceph-mon[75097]: pgmap v1446: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:33 compute-0 ceph-mon[75097]: pgmap v1447: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:34 compute-0 podman[265116]: 2026-01-23 19:17:34.984515979 +0000 UTC m=+0.063116068 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 23 19:17:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:36 compute-0 ceph-mon[75097]: pgmap v1448: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:38 compute-0 ceph-mon[75097]: pgmap v1449: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:17:40 compute-0 ceph-mon[75097]: pgmap v1450: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:42 compute-0 ceph-mon[75097]: pgmap v1451: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:44 compute-0 ceph-mon[75097]: pgmap v1452: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:46 compute-0 ceph-mon[75097]: pgmap v1453: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:48 compute-0 ceph-mon[75097]: pgmap v1454: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:50 compute-0 ceph-mon[75097]: pgmap v1455: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:52 compute-0 ceph-mon[75097]: pgmap v1456: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:54 compute-0 ceph-mon[75097]: pgmap v1457: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:56 compute-0 ceph-mon[75097]: pgmap v1458: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:17:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:17:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4263063509' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:17:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:17:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4263063509' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:17:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/4263063509' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:17:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/4263063509' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:17:58 compute-0 podman[265136]: 2026-01-23 19:17:58.087444559 +0000 UTC m=+0.152050794 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 19:17:58 compute-0 ceph-mon[75097]: pgmap v1459: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:17:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:18:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:18:00 compute-0 ceph-mon[75097]: pgmap v1460: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:18:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:18:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:18:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:18:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:01 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:02 compute-0 ceph-mon[75097]: pgmap v1461: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:04 compute-0 ceph-mon[75097]: pgmap v1462: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:05 compute-0 podman[265162]: 2026-01-23 19:18:05.966940002 +0000 UTC m=+0.054578091 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 23 19:18:06 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:06 compute-0 ceph-mon[75097]: pgmap v1463: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:07 compute-0 nova_compute[240119]: 2026-01-23 19:18:07.343 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:18:08 compute-0 ceph-mon[75097]: pgmap v1464: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:10 compute-0 ceph-mon[75097]: pgmap v1465: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:11 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:12 compute-0 ceph-mon[75097]: pgmap v1466: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:13 compute-0 nova_compute[240119]: 2026-01-23 19:18:13.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:18:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:18:13.598 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:18:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:18:13.599 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:18:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:18:13.599 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:18:14 compute-0 nova_compute[240119]: 2026-01-23 19:18:14.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:18:14 compute-0 nova_compute[240119]: 2026-01-23 19:18:14.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:18:14 compute-0 nova_compute[240119]: 2026-01-23 19:18:14.381 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:18:14 compute-0 nova_compute[240119]: 2026-01-23 19:18:14.382 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:18:14 compute-0 nova_compute[240119]: 2026-01-23 19:18:14.382 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:18:14 compute-0 nova_compute[240119]: 2026-01-23 19:18:14.382 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:18:14 compute-0 nova_compute[240119]: 2026-01-23 19:18:14.383 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:18:14 compute-0 ceph-mon[75097]: pgmap v1467: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:18:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313412278' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:18:14 compute-0 nova_compute[240119]: 2026-01-23 19:18:14.987 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.175 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.176 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4945MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.177 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.177 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.246 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.246 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.262 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:18:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:18:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2829729189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.851 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.855 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.869 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.870 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:18:15 compute-0 nova_compute[240119]: 2026-01-23 19:18:15.870 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:18:16 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/313412278' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:18:16 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:16 compute-0 nova_compute[240119]: 2026-01-23 19:18:16.872 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:18:16 compute-0 nova_compute[240119]: 2026-01-23 19:18:16.873 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:18:16 compute-0 nova_compute[240119]: 2026-01-23 19:18:16.873 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:18:16 compute-0 nova_compute[240119]: 2026-01-23 19:18:16.874 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:18:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:17 compute-0 ceph-mon[75097]: pgmap v1468: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2829729189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:18:18 compute-0 ceph-mon[75097]: pgmap v1469: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:19 compute-0 nova_compute[240119]: 2026-01-23 19:18:19.351 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:18:19 compute-0 nova_compute[240119]: 2026-01-23 19:18:19.351 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:18:19 compute-0 nova_compute[240119]: 2026-01-23 19:18:19.351 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:18:19 compute-0 nova_compute[240119]: 2026-01-23 19:18:19.626 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:18:20 compute-0 ceph-mon[75097]: pgmap v1470: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:21 compute-0 nova_compute[240119]: 2026-01-23 19:18:21.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:18:21 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:22 compute-0 ceph-mon[75097]: pgmap v1471: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:23 compute-0 nova_compute[240119]: 2026-01-23 19:18:23.343 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:18:24 compute-0 ceph-mon[75097]: pgmap v1472: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:26 compute-0 ceph-mon[75097]: pgmap v1473: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:26 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:27 compute-0 sudo[265225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:18:27 compute-0 sudo[265225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:27 compute-0 sudo[265225]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:27 compute-0 sudo[265250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 23 19:18:27 compute-0 sudo[265250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:28 compute-0 ceph-mon[75097]: pgmap v1474: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:18:28
Jan 23 19:18:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:18:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:18:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 23 19:18:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:18:29 compute-0 podman[265319]: 2026-01-23 19:18:29.107152187 +0000 UTC m=+1.150833804 container exec 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:18:29 compute-0 podman[265333]: 2026-01-23 19:18:29.378078797 +0000 UTC m=+0.448254541 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 19:18:29 compute-0 podman[265365]: 2026-01-23 19:18:29.403827174 +0000 UTC m=+0.165321648 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:18:29 compute-0 podman[265319]: 2026-01-23 19:18:29.409971743 +0000 UTC m=+1.453653380 container exec_died 3f8d5ffe26bfc97a6948691bd6fe91d74fbdd8d02113c35fd89c81e8021c0e77 (image=quay.io/ceph/ceph:v20, name=ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 19:18:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:18:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:18:30 compute-0 sudo[265250]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:18:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:18:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:18:30 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:18:30 compute-0 sudo[265531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:18:30 compute-0 sudo[265531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:30 compute-0 sudo[265531]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:30 compute-0 sudo[265556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:18:30 compute-0 sudo[265556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:30 compute-0 ceph-mon[75097]: pgmap v1475: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:18:30 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:18:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:18:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:18:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:18:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:18:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:18:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:18:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:18:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:18:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:18:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:18:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:18:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:18:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:18:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:18:31 compute-0 sudo[265556]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:18:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:18:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:18:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:18:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:18:31 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 19:18:31 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 19:18:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:18:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:18:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:18:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:18:31 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:18:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:18:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:18:31 compute-0 sudo[265612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:18:31 compute-0 sudo[265612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:31 compute-0 sudo[265612]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:31 compute-0 sudo[265637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:18:31 compute-0 sudo[265637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:31 compute-0 podman[265675]: 2026-01-23 19:18:31.669205275 +0000 UTC m=+0.065592609 container create b1c44c1c99dee4c21a27fc63d0c1c0817dcb182571ac7f38e5e4f0130ebcd70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hoover, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 19:18:31 compute-0 systemd[1]: Started libpod-conmon-b1c44c1c99dee4c21a27fc63d0c1c0817dcb182571ac7f38e5e4f0130ebcd70c.scope.
Jan 23 19:18:31 compute-0 podman[265675]: 2026-01-23 19:18:31.628889983 +0000 UTC m=+0.025277337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:18:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:18:31 compute-0 podman[265675]: 2026-01-23 19:18:31.761140685 +0000 UTC m=+0.157528069 container init b1c44c1c99dee4c21a27fc63d0c1c0817dcb182571ac7f38e5e4f0130ebcd70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:18:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:18:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:18:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:18:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:18:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:18:31 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:18:31 compute-0 podman[265675]: 2026-01-23 19:18:31.772140432 +0000 UTC m=+0.168527766 container start b1c44c1c99dee4c21a27fc63d0c1c0817dcb182571ac7f38e5e4f0130ebcd70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hoover, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 19:18:31 compute-0 podman[265675]: 2026-01-23 19:18:31.775224908 +0000 UTC m=+0.171612302 container attach b1c44c1c99dee4c21a27fc63d0c1c0817dcb182571ac7f38e5e4f0130ebcd70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:18:31 compute-0 awesome_hoover[265691]: 167 167
Jan 23 19:18:31 compute-0 systemd[1]: libpod-b1c44c1c99dee4c21a27fc63d0c1c0817dcb182571ac7f38e5e4f0130ebcd70c.scope: Deactivated successfully.
Jan 23 19:18:31 compute-0 podman[265675]: 2026-01-23 19:18:31.780345512 +0000 UTC m=+0.176732876 container died b1c44c1c99dee4c21a27fc63d0c1c0817dcb182571ac7f38e5e4f0130ebcd70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac34e429ea8a149894e3502fd3baa2e8b679ae27eea488f18ade39abaa7e0452-merged.mount: Deactivated successfully.
Jan 23 19:18:31 compute-0 podman[265675]: 2026-01-23 19:18:31.825674757 +0000 UTC m=+0.222062091 container remove b1c44c1c99dee4c21a27fc63d0c1c0817dcb182571ac7f38e5e4f0130ebcd70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:18:31 compute-0 systemd[1]: libpod-conmon-b1c44c1c99dee4c21a27fc63d0c1c0817dcb182571ac7f38e5e4f0130ebcd70c.scope: Deactivated successfully.
Jan 23 19:18:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:32 compute-0 podman[265715]: 2026-01-23 19:18:32.0228795 +0000 UTC m=+0.059715555 container create 9fb001f9d8857d39cc074509f711e99b0979dd5247c46696d0b50e17cdbae334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rhodes, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 19:18:32 compute-0 systemd[1]: Started libpod-conmon-9fb001f9d8857d39cc074509f711e99b0979dd5247c46696d0b50e17cdbae334.scope.
Jan 23 19:18:32 compute-0 podman[265715]: 2026-01-23 19:18:31.992187763 +0000 UTC m=+0.029023848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:18:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ea470c30d527f3b461dcae0581bedca11e646d7e54a212e70ffd0827290d99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ea470c30d527f3b461dcae0581bedca11e646d7e54a212e70ffd0827290d99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ea470c30d527f3b461dcae0581bedca11e646d7e54a212e70ffd0827290d99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ea470c30d527f3b461dcae0581bedca11e646d7e54a212e70ffd0827290d99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ea470c30d527f3b461dcae0581bedca11e646d7e54a212e70ffd0827290d99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:32 compute-0 podman[265715]: 2026-01-23 19:18:32.134445777 +0000 UTC m=+0.171281852 container init 9fb001f9d8857d39cc074509f711e99b0979dd5247c46696d0b50e17cdbae334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:18:32 compute-0 podman[265715]: 2026-01-23 19:18:32.1423367 +0000 UTC m=+0.179172735 container start 9fb001f9d8857d39cc074509f711e99b0979dd5247c46696d0b50e17cdbae334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 19:18:32 compute-0 podman[265715]: 2026-01-23 19:18:32.152288772 +0000 UTC m=+0.189124797 container attach 9fb001f9d8857d39cc074509f711e99b0979dd5247c46696d0b50e17cdbae334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 19:18:32 compute-0 tender_rhodes[265732]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:18:32 compute-0 tender_rhodes[265732]: --> All data devices are unavailable
Jan 23 19:18:32 compute-0 systemd[1]: libpod-9fb001f9d8857d39cc074509f711e99b0979dd5247c46696d0b50e17cdbae334.scope: Deactivated successfully.
Jan 23 19:18:32 compute-0 podman[265715]: 2026-01-23 19:18:32.601115705 +0000 UTC m=+0.637951720 container died 9fb001f9d8857d39cc074509f711e99b0979dd5247c46696d0b50e17cdbae334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 23 19:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-07ea470c30d527f3b461dcae0581bedca11e646d7e54a212e70ffd0827290d99-merged.mount: Deactivated successfully.
Jan 23 19:18:32 compute-0 podman[265715]: 2026-01-23 19:18:32.64071164 +0000 UTC m=+0.677547635 container remove 9fb001f9d8857d39cc074509f711e99b0979dd5247c46696d0b50e17cdbae334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 19:18:32 compute-0 systemd[1]: libpod-conmon-9fb001f9d8857d39cc074509f711e99b0979dd5247c46696d0b50e17cdbae334.scope: Deactivated successfully.
Jan 23 19:18:32 compute-0 sudo[265637]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:32 compute-0 sudo[265762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:18:32 compute-0 sudo[265762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:32 compute-0 sudo[265762]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:32 compute-0 ceph-mon[75097]: pgmap v1476: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:32 compute-0 sudo[265787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:18:32 compute-0 sudo[265787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:33 compute-0 podman[265823]: 2026-01-23 19:18:33.058017555 +0000 UTC m=+0.039938055 container create 4cd895276a2d40d583ac0589723b6b16393428307e31d6fb5feec1295530d2eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_satoshi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:18:33 compute-0 systemd[1]: Started libpod-conmon-4cd895276a2d40d583ac0589723b6b16393428307e31d6fb5feec1295530d2eb.scope.
Jan 23 19:18:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:18:33 compute-0 podman[265823]: 2026-01-23 19:18:33.040545249 +0000 UTC m=+0.022465769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:18:33 compute-0 podman[265823]: 2026-01-23 19:18:33.137034519 +0000 UTC m=+0.118955049 container init 4cd895276a2d40d583ac0589723b6b16393428307e31d6fb5feec1295530d2eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 19:18:33 compute-0 podman[265823]: 2026-01-23 19:18:33.145777203 +0000 UTC m=+0.127697703 container start 4cd895276a2d40d583ac0589723b6b16393428307e31d6fb5feec1295530d2eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_satoshi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:18:33 compute-0 nervous_satoshi[265840]: 167 167
Jan 23 19:18:33 compute-0 podman[265823]: 2026-01-23 19:18:33.150010586 +0000 UTC m=+0.131931116 container attach 4cd895276a2d40d583ac0589723b6b16393428307e31d6fb5feec1295530d2eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_satoshi, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:18:33 compute-0 systemd[1]: libpod-4cd895276a2d40d583ac0589723b6b16393428307e31d6fb5feec1295530d2eb.scope: Deactivated successfully.
Jan 23 19:18:33 compute-0 podman[265823]: 2026-01-23 19:18:33.151901831 +0000 UTC m=+0.133822341 container died 4cd895276a2d40d583ac0589723b6b16393428307e31d6fb5feec1295530d2eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_satoshi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 23 19:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af88920de704b0a8c5ac14733166640394b68130ebad823010b4ccd23291523-merged.mount: Deactivated successfully.
Jan 23 19:18:33 compute-0 podman[265823]: 2026-01-23 19:18:33.191118067 +0000 UTC m=+0.173038567 container remove 4cd895276a2d40d583ac0589723b6b16393428307e31d6fb5feec1295530d2eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_satoshi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:18:33 compute-0 systemd[1]: libpod-conmon-4cd895276a2d40d583ac0589723b6b16393428307e31d6fb5feec1295530d2eb.scope: Deactivated successfully.
Jan 23 19:18:33 compute-0 podman[265863]: 2026-01-23 19:18:33.347229999 +0000 UTC m=+0.041129343 container create 8433bbe5a471d04a6266aab0c229a4ccb3ec7be820f5acb3b0aeeb33357a7d7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 23 19:18:33 compute-0 systemd[1]: Started libpod-conmon-8433bbe5a471d04a6266aab0c229a4ccb3ec7be820f5acb3b0aeeb33357a7d7c.scope.
Jan 23 19:18:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eeac201161e75c2b47d139d8cbd71f6490c5ef58129a8281d11df8e3c23971f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eeac201161e75c2b47d139d8cbd71f6490c5ef58129a8281d11df8e3c23971f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eeac201161e75c2b47d139d8cbd71f6490c5ef58129a8281d11df8e3c23971f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eeac201161e75c2b47d139d8cbd71f6490c5ef58129a8281d11df8e3c23971f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:33 compute-0 podman[265863]: 2026-01-23 19:18:33.329361414 +0000 UTC m=+0.023260778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:18:33 compute-0 podman[265863]: 2026-01-23 19:18:33.425657779 +0000 UTC m=+0.119557123 container init 8433bbe5a471d04a6266aab0c229a4ccb3ec7be820f5acb3b0aeeb33357a7d7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:18:33 compute-0 podman[265863]: 2026-01-23 19:18:33.431742918 +0000 UTC m=+0.125642262 container start 8433bbe5a471d04a6266aab0c229a4ccb3ec7be820f5acb3b0aeeb33357a7d7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 23 19:18:33 compute-0 podman[265863]: 2026-01-23 19:18:33.435008507 +0000 UTC m=+0.128907851 container attach 8433bbe5a471d04a6266aab0c229a4ccb3ec7be820f5acb3b0aeeb33357a7d7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_diffie, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:18:33 compute-0 sweet_diffie[265879]: {
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:     "0": [
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:         {
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "devices": [
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "/dev/loop3"
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             ],
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_name": "ceph_lv0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_size": "21470642176",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "name": "ceph_lv0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "tags": {
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.cluster_name": "ceph",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.crush_device_class": "",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.encrypted": "0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.objectstore": "bluestore",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.osd_id": "0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.type": "block",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.vdo": "0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.with_tpm": "0"
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             },
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "type": "block",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "vg_name": "ceph_vg0"
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:         }
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:     ],
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:     "1": [
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:         {
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "devices": [
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "/dev/loop4"
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             ],
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_name": "ceph_lv1",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_size": "21470642176",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "name": "ceph_lv1",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "tags": {
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.cluster_name": "ceph",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.crush_device_class": "",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.encrypted": "0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.objectstore": "bluestore",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.osd_id": "1",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.type": "block",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.vdo": "0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.with_tpm": "0"
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             },
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "type": "block",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "vg_name": "ceph_vg1"
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:         }
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:     ],
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:     "2": [
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:         {
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "devices": [
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "/dev/loop5"
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             ],
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_name": "ceph_lv2",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_size": "21470642176",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "name": "ceph_lv2",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "tags": {
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.cluster_name": "ceph",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.crush_device_class": "",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.encrypted": "0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.objectstore": "bluestore",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.osd_id": "2",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.type": "block",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.vdo": "0",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:                 "ceph.with_tpm": "0"
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             },
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "type": "block",
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:             "vg_name": "ceph_vg2"
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:         }
Jan 23 19:18:33 compute-0 sweet_diffie[265879]:     ]
Jan 23 19:18:33 compute-0 sweet_diffie[265879]: }
Jan 23 19:18:33 compute-0 systemd[1]: libpod-8433bbe5a471d04a6266aab0c229a4ccb3ec7be820f5acb3b0aeeb33357a7d7c.scope: Deactivated successfully.
Jan 23 19:18:33 compute-0 podman[265888]: 2026-01-23 19:18:33.815217889 +0000 UTC m=+0.026835304 container died 8433bbe5a471d04a6266aab0c229a4ccb3ec7be820f5acb3b0aeeb33357a7d7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 23 19:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1eeac201161e75c2b47d139d8cbd71f6490c5ef58129a8281d11df8e3c23971f-merged.mount: Deactivated successfully.
Jan 23 19:18:33 compute-0 podman[265888]: 2026-01-23 19:18:33.846929711 +0000 UTC m=+0.058547106 container remove 8433bbe5a471d04a6266aab0c229a4ccb3ec7be820f5acb3b0aeeb33357a7d7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 19:18:33 compute-0 systemd[1]: libpod-conmon-8433bbe5a471d04a6266aab0c229a4ccb3ec7be820f5acb3b0aeeb33357a7d7c.scope: Deactivated successfully.
Jan 23 19:18:33 compute-0 sudo[265787]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:33 compute-0 sudo[265904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:18:33 compute-0 sudo[265904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:33 compute-0 sudo[265904]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:34 compute-0 sudo[265929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:18:34 compute-0 sudo[265929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:34 compute-0 podman[265965]: 2026-01-23 19:18:34.334929518 +0000 UTC m=+0.039811661 container create 321bd984102dbc2db715e37c511941d16a0d4e07bfd86e8adb1e36152935fd94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 19:18:34 compute-0 systemd[1]: Started libpod-conmon-321bd984102dbc2db715e37c511941d16a0d4e07bfd86e8adb1e36152935fd94.scope.
Jan 23 19:18:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:18:34 compute-0 podman[265965]: 2026-01-23 19:18:34.401494789 +0000 UTC m=+0.106376962 container init 321bd984102dbc2db715e37c511941d16a0d4e07bfd86e8adb1e36152935fd94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rhodes, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 23 19:18:34 compute-0 podman[265965]: 2026-01-23 19:18:34.409873203 +0000 UTC m=+0.114755356 container start 321bd984102dbc2db715e37c511941d16a0d4e07bfd86e8adb1e36152935fd94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 23 19:18:34 compute-0 podman[265965]: 2026-01-23 19:18:34.314902701 +0000 UTC m=+0.019784874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:18:34 compute-0 podman[265965]: 2026-01-23 19:18:34.413459521 +0000 UTC m=+0.118341694 container attach 321bd984102dbc2db715e37c511941d16a0d4e07bfd86e8adb1e36152935fd94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rhodes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 23 19:18:34 compute-0 eloquent_rhodes[265982]: 167 167
Jan 23 19:18:34 compute-0 systemd[1]: libpod-321bd984102dbc2db715e37c511941d16a0d4e07bfd86e8adb1e36152935fd94.scope: Deactivated successfully.
Jan 23 19:18:34 compute-0 podman[265965]: 2026-01-23 19:18:34.415591423 +0000 UTC m=+0.120473596 container died 321bd984102dbc2db715e37c511941d16a0d4e07bfd86e8adb1e36152935fd94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 23 19:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e3d340efac0e2fbbe77e213968a258024a9a2acfcfbd4e94a3fa29c1208e092-merged.mount: Deactivated successfully.
Jan 23 19:18:34 compute-0 podman[265965]: 2026-01-23 19:18:34.457031342 +0000 UTC m=+0.161913505 container remove 321bd984102dbc2db715e37c511941d16a0d4e07bfd86e8adb1e36152935fd94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 19:18:34 compute-0 systemd[1]: libpod-conmon-321bd984102dbc2db715e37c511941d16a0d4e07bfd86e8adb1e36152935fd94.scope: Deactivated successfully.
Jan 23 19:18:34 compute-0 podman[266006]: 2026-01-23 19:18:34.657955137 +0000 UTC m=+0.053237638 container create 24bc45a70dd413eba329395a60db9ac243c4db4f1facb4f5cd689b55651f8914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:18:34 compute-0 systemd[1]: Started libpod-conmon-24bc45a70dd413eba329395a60db9ac243c4db4f1facb4f5cd689b55651f8914.scope.
Jan 23 19:18:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e526b98f74f00bdee941707ca447c5dbbcbd875bfe6c9e2a8fe1f014c62a727/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e526b98f74f00bdee941707ca447c5dbbcbd875bfe6c9e2a8fe1f014c62a727/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e526b98f74f00bdee941707ca447c5dbbcbd875bfe6c9e2a8fe1f014c62a727/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e526b98f74f00bdee941707ca447c5dbbcbd875bfe6c9e2a8fe1f014c62a727/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:18:34 compute-0 podman[266006]: 2026-01-23 19:18:34.636714029 +0000 UTC m=+0.031996640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:18:34 compute-0 podman[266006]: 2026-01-23 19:18:34.736951771 +0000 UTC m=+0.132234312 container init 24bc45a70dd413eba329395a60db9ac243c4db4f1facb4f5cd689b55651f8914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 19:18:34 compute-0 podman[266006]: 2026-01-23 19:18:34.745702694 +0000 UTC m=+0.140985205 container start 24bc45a70dd413eba329395a60db9ac243c4db4f1facb4f5cd689b55651f8914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hamilton, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:18:34 compute-0 podman[266006]: 2026-01-23 19:18:34.750115091 +0000 UTC m=+0.145397612 container attach 24bc45a70dd413eba329395a60db9ac243c4db4f1facb4f5cd689b55651f8914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 23 19:18:34 compute-0 ceph-mon[75097]: pgmap v1477: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:35 compute-0 lvm[266102]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:18:35 compute-0 lvm[266103]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:18:35 compute-0 lvm[266102]: VG ceph_vg1 finished
Jan 23 19:18:35 compute-0 lvm[266103]: VG ceph_vg2 finished
Jan 23 19:18:35 compute-0 lvm[266100]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:18:35 compute-0 lvm[266100]: VG ceph_vg0 finished
Jan 23 19:18:35 compute-0 ecstatic_hamilton[266022]: {}
Jan 23 19:18:35 compute-0 systemd[1]: libpod-24bc45a70dd413eba329395a60db9ac243c4db4f1facb4f5cd689b55651f8914.scope: Deactivated successfully.
Jan 23 19:18:35 compute-0 systemd[1]: libpod-24bc45a70dd413eba329395a60db9ac243c4db4f1facb4f5cd689b55651f8914.scope: Consumed 1.396s CPU time.
Jan 23 19:18:35 compute-0 podman[266006]: 2026-01-23 19:18:35.593343692 +0000 UTC m=+0.988626203 container died 24bc45a70dd413eba329395a60db9ac243c4db4f1facb4f5cd689b55651f8914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:18:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e526b98f74f00bdee941707ca447c5dbbcbd875bfe6c9e2a8fe1f014c62a727-merged.mount: Deactivated successfully.
Jan 23 19:18:35 compute-0 podman[266006]: 2026-01-23 19:18:35.644353684 +0000 UTC m=+1.039636185 container remove 24bc45a70dd413eba329395a60db9ac243c4db4f1facb4f5cd689b55651f8914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hamilton, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:18:35 compute-0 systemd[1]: libpod-conmon-24bc45a70dd413eba329395a60db9ac243c4db4f1facb4f5cd689b55651f8914.scope: Deactivated successfully.
Jan 23 19:18:35 compute-0 sudo[265929]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:18:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:18:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:18:35 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:18:35 compute-0 sudo[266118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:18:35 compute-0 sudo[266118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:18:35 compute-0 sudo[266118]: pam_unix(sudo:session): session closed for user root
Jan 23 19:18:36 compute-0 ceph-mon[75097]: pgmap v1478: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:18:36 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:18:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:37 compute-0 podman[266143]: 2026-01-23 19:18:37.020400373 +0000 UTC m=+0.099121516 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 19:18:38 compute-0 ceph-mon[75097]: pgmap v1479: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:18:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:41 compute-0 ceph-mon[75097]: pgmap v1480: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:42 compute-0 ceph-mon[75097]: pgmap v1481: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:44 compute-0 ceph-mon[75097]: pgmap v1482: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:46 compute-0 ceph-mon[75097]: pgmap v1483: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:48 compute-0 ceph-mon[75097]: pgmap v1484: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:50 compute-0 ceph-mon[75097]: pgmap v1485: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:53 compute-0 ceph-mon[75097]: pgmap v1486: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:54 compute-0 ceph-mon[75097]: pgmap v1487: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:56 compute-0 ceph-mon[75097]: pgmap v1488: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:18:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/56873707' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:18:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:18:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/56873707' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:18:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:18:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/56873707' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:18:58 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/56873707' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:18:58 compute-0 ceph-mon[75097]: pgmap v1489: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:18:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:00 compute-0 podman[266162]: 2026-01-23 19:19:00.047189383 +0000 UTC m=+0.103040242 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 19:19:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:19:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:19:00 compute-0 ceph-mon[75097]: pgmap v1490: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:19:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:19:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:19:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:19:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:02 compute-0 ceph-mon[75097]: pgmap v1491: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:04 compute-0 ceph-mon[75097]: pgmap v1492: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:07 compute-0 ceph-mon[75097]: pgmap v1493: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:07 compute-0 podman[266190]: 2026-01-23 19:19:07.968578107 +0000 UTC m=+0.050800598 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 23 19:19:08 compute-0 nova_compute[240119]: 2026-01-23 19:19:08.359 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:08 compute-0 ceph-mon[75097]: pgmap v1494: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:10 compute-0 ceph-mon[75097]: pgmap v1495: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:11 compute-0 nova_compute[240119]: 2026-01-23 19:19:11.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:11 compute-0 nova_compute[240119]: 2026-01-23 19:19:11.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 19:19:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:12 compute-0 ceph-mon[75097]: pgmap v1496: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:19:13.600 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:19:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:19:13.601 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:19:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:19:13.601 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:19:14 compute-0 nova_compute[240119]: 2026-01-23 19:19:14.364 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:14 compute-0 nova_compute[240119]: 2026-01-23 19:19:14.365 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:14 compute-0 nova_compute[240119]: 2026-01-23 19:19:14.393 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:19:14 compute-0 nova_compute[240119]: 2026-01-23 19:19:14.393 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:19:14 compute-0 nova_compute[240119]: 2026-01-23 19:19:14.393 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:19:14 compute-0 nova_compute[240119]: 2026-01-23 19:19:14.394 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:19:14 compute-0 nova_compute[240119]: 2026-01-23 19:19:14.394 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:19:14 compute-0 ceph-mon[75097]: pgmap v1497: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:14 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:19:14 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/796140182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:19:14 compute-0 nova_compute[240119]: 2026-01-23 19:19:14.954 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:19:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.147 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.148 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4938MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.149 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.149 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.218 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.219 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.272 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:19:15 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:19:15 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260192688' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.816 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.821 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.953 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.954 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:19:15 compute-0 nova_compute[240119]: 2026-01-23 19:19:15.954 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:19:15 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/796140182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:19:15 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4260192688' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:19:16 compute-0 nova_compute[240119]: 2026-01-23 19:19:16.938 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:16 compute-0 nova_compute[240119]: 2026-01-23 19:19:16.939 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:16 compute-0 nova_compute[240119]: 2026-01-23 19:19:16.939 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:19:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:17 compute-0 ceph-mon[75097]: pgmap v1498: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:17 compute-0 nova_compute[240119]: 2026-01-23 19:19:17.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:17 compute-0 nova_compute[240119]: 2026-01-23 19:19:17.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:18 compute-0 ceph-mon[75097]: pgmap v1499: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:19 compute-0 nova_compute[240119]: 2026-01-23 19:19:19.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:19 compute-0 nova_compute[240119]: 2026-01-23 19:19:19.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:19:19 compute-0 nova_compute[240119]: 2026-01-23 19:19:19.350 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:19:19 compute-0 nova_compute[240119]: 2026-01-23 19:19:19.365 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:19:19 compute-0 nova_compute[240119]: 2026-01-23 19:19:19.365 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:20 compute-0 ceph-mon[75097]: pgmap v1500: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:22 compute-0 ceph-mon[75097]: pgmap v1501: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:22 compute-0 nova_compute[240119]: 2026-01-23 19:19:22.358 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:23 compute-0 nova_compute[240119]: 2026-01-23 19:19:23.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:19:23 compute-0 nova_compute[240119]: 2026-01-23 19:19:23.349 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 19:19:23 compute-0 nova_compute[240119]: 2026-01-23 19:19:23.362 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 19:19:24 compute-0 ceph-mon[75097]: pgmap v1502: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:26 compute-0 ceph-mon[75097]: pgmap v1503: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:28 compute-0 ceph-mon[75097]: pgmap v1504: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:19:28
Jan 23 19:19:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:19:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:19:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'images']
Jan 23 19:19:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:19:29 compute-0 rsyslogd[1007]: imjournal: 14399 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 23 19:19:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:19:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:19:30 compute-0 ceph-mon[75097]: pgmap v1505: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:19:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:19:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:19:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:19:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:31 compute-0 podman[266254]: 2026-01-23 19:19:31.035542357 +0000 UTC m=+0.117152374 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 19:19:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:19:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:19:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:19:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:19:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:19:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:19:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:19:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:19:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:19:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:19:32 compute-0 ceph-mon[75097]: pgmap v1506: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:32 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:34 compute-0 ceph-mon[75097]: pgmap v1507: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:34 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:35 compute-0 sudo[266280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:19:35 compute-0 sudo[266280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:19:35 compute-0 sudo[266280]: pam_unix(sudo:session): session closed for user root
Jan 23 19:19:35 compute-0 sudo[266305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 23 19:19:35 compute-0 sudo[266305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:19:36 compute-0 ceph-mon[75097]: pgmap v1508: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:36 compute-0 sudo[266305]: pam_unix(sudo:session): session closed for user root
Jan 23 19:19:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:19:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:19:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 19:19:36 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:19:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 19:19:36 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:19:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 19:19:37 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:19:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 19:19:37 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:19:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:19:37 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:19:37 compute-0 sudo[266361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:19:37 compute-0 sudo[266361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:19:37 compute-0 sudo[266361]: pam_unix(sudo:session): session closed for user root
Jan 23 19:19:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:37 compute-0 sudo[266386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 23 19:19:37 compute-0 sudo[266386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:37.521432) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195977521496, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1277, "num_deletes": 251, "total_data_size": 2043521, "memory_usage": 2071312, "flush_reason": "Manual Compaction"}
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 23 19:19:37 compute-0 sshd-session[266252]: Connection closed by 206.168.34.122 port 47158 [preauth]
Jan 23 19:19:37 compute-0 podman[266423]: 2026-01-23 19:19:37.618950639 +0000 UTC m=+0.025820399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195977919147, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1992021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32158, "largest_seqno": 33434, "table_properties": {"data_size": 1985987, "index_size": 3365, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12601, "raw_average_key_size": 19, "raw_value_size": 1973864, "raw_average_value_size": 3093, "num_data_blocks": 152, "num_entries": 638, "num_filter_entries": 638, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769195847, "oldest_key_time": 1769195847, "file_creation_time": 1769195977, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 397769 microseconds, and 16720 cpu microseconds.
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:19:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:19:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 23 19:19:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:19:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 23 19:19:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 23 19:19:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:19:37 compute-0 podman[266423]: 2026-01-23 19:19:37.922287809 +0000 UTC m=+0.329157539 container create b8c1e4106742dd611659f56600b51f794d31df148a00f0bc0840280f06a4a523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:37.919206) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1992021 bytes OK
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:37.919229) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:37.926066) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:37.926094) EVENT_LOG_v1 {"time_micros": 1769195977926086, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:37.926116) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2037787, prev total WAL file size 2072190, number of live WAL files 2.
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:37.927159) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1945KB)], [68(8611KB)]
Jan 23 19:19:37 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195977927245, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10809781, "oldest_snapshot_seqno": -1}
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6350 keys, 9027371 bytes, temperature: kUnknown
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195978548341, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9027371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8985964, "index_size": 24401, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15941, "raw_key_size": 160228, "raw_average_key_size": 25, "raw_value_size": 8873184, "raw_average_value_size": 1397, "num_data_blocks": 992, "num_entries": 6350, "num_filter_entries": 6350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769193244, "oldest_key_time": 0, "file_creation_time": 1769195977, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f6198cf-79c4-4079-bac9-29a7a92f17c9", "db_session_id": "95FKTO3MTV4ZWA92OP2M", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:38.548620) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9027371 bytes
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:38.566305) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 17.4 rd, 14.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 8.4 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(10.0) write-amplify(4.5) OK, records in: 6864, records dropped: 514 output_compression: NoCompression
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:38.566331) EVENT_LOG_v1 {"time_micros": 1769195978566321, "job": 38, "event": "compaction_finished", "compaction_time_micros": 621155, "compaction_time_cpu_micros": 26692, "output_level": 6, "num_output_files": 1, "total_output_size": 9027371, "num_input_records": 6864, "num_output_records": 6350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195978566794, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769195978568519, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:37.927039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:38.568594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:38.568600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:38.568602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:38.568604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:19:38 compute-0 ceph-mon[75097]: rocksdb: (Original Log Time 2026/01/23-19:19:38.568605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 19:19:38 compute-0 systemd[1]: Started libpod-conmon-b8c1e4106742dd611659f56600b51f794d31df148a00f0bc0840280f06a4a523.scope.
Jan 23 19:19:38 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:19:38 compute-0 podman[266423]: 2026-01-23 19:19:38.723679228 +0000 UTC m=+1.130548988 container init b8c1e4106742dd611659f56600b51f794d31df148a00f0bc0840280f06a4a523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:19:38 compute-0 podman[266423]: 2026-01-23 19:19:38.733461497 +0000 UTC m=+1.140331237 container start b8c1e4106742dd611659f56600b51f794d31df148a00f0bc0840280f06a4a523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:19:38 compute-0 podman[266423]: 2026-01-23 19:19:38.737738861 +0000 UTC m=+1.144608591 container attach b8c1e4106742dd611659f56600b51f794d31df148a00f0bc0840280f06a4a523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:19:38 compute-0 quizzical_jang[266437]: 167 167
Jan 23 19:19:38 compute-0 systemd[1]: libpod-b8c1e4106742dd611659f56600b51f794d31df148a00f0bc0840280f06a4a523.scope: Deactivated successfully.
Jan 23 19:19:38 compute-0 podman[266423]: 2026-01-23 19:19:38.740535039 +0000 UTC m=+1.147404779 container died b8c1e4106742dd611659f56600b51f794d31df148a00f0bc0840280f06a4a523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 23 19:19:38 compute-0 ceph-mon[75097]: pgmap v1509: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-49691f4da4c6f8b716c3b18e73e2affaa46968a2e7969ec72688327a8e8d9266-merged.mount: Deactivated successfully.
Jan 23 19:19:38 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:39 compute-0 podman[266423]: 2026-01-23 19:19:39.081445823 +0000 UTC m=+1.488315573 container remove b8c1e4106742dd611659f56600b51f794d31df148a00f0bc0840280f06a4a523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 23 19:19:39 compute-0 systemd[1]: libpod-conmon-b8c1e4106742dd611659f56600b51f794d31df148a00f0bc0840280f06a4a523.scope: Deactivated successfully.
Jan 23 19:19:39 compute-0 podman[266438]: 2026-01-23 19:19:39.202477831 +0000 UTC m=+0.584621121 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 23 19:19:39 compute-0 podman[266481]: 2026-01-23 19:19:39.242599399 +0000 UTC m=+0.032233126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:19:39 compute-0 podman[266481]: 2026-01-23 19:19:39.600292801 +0000 UTC m=+0.389926498 container create aa280a4d95246651bfb70e50ea8ff6747dd9fa81e28389de4643587f55467bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_sinoussi, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:19:39 compute-0 systemd[1]: Started libpod-conmon-aa280a4d95246651bfb70e50ea8ff6747dd9fa81e28389de4643587f55467bf1.scope.
Jan 23 19:19:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7233050a92e31ed9b7e551fcbaf950554abd81b50d88249f2938b41c0081c23b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7233050a92e31ed9b7e551fcbaf950554abd81b50d88249f2938b41c0081c23b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7233050a92e31ed9b7e551fcbaf950554abd81b50d88249f2938b41c0081c23b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7233050a92e31ed9b7e551fcbaf950554abd81b50d88249f2938b41c0081c23b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7233050a92e31ed9b7e551fcbaf950554abd81b50d88249f2938b41c0081c23b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:40 compute-0 podman[266481]: 2026-01-23 19:19:40.242167826 +0000 UTC m=+1.031801533 container init aa280a4d95246651bfb70e50ea8ff6747dd9fa81e28389de4643587f55467bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 19:19:40 compute-0 podman[266481]: 2026-01-23 19:19:40.250375096 +0000 UTC m=+1.040008813 container start aa280a4d95246651bfb70e50ea8ff6747dd9fa81e28389de4643587f55467bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_sinoussi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:19:40 compute-0 clever_sinoussi[266497]: --> passed data devices: 0 physical, 3 LVM
Jan 23 19:19:40 compute-0 clever_sinoussi[266497]: --> All data devices are unavailable
Jan 23 19:19:40 compute-0 systemd[1]: libpod-aa280a4d95246651bfb70e50ea8ff6747dd9fa81e28389de4643587f55467bf1.scope: Deactivated successfully.
Jan 23 19:19:40 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:41 compute-0 podman[266481]: 2026-01-23 19:19:41.242164825 +0000 UTC m=+2.031798532 container attach aa280a4d95246651bfb70e50ea8ff6747dd9fa81e28389de4643587f55467bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_sinoussi, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:19:41 compute-0 podman[266481]: 2026-01-23 19:19:41.244065691 +0000 UTC m=+2.033699398 container died aa280a4d95246651bfb70e50ea8ff6747dd9fa81e28389de4643587f55467bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:19:41 compute-0 ceph-mon[75097]: pgmap v1510: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7233050a92e31ed9b7e551fcbaf950554abd81b50d88249f2938b41c0081c23b-merged.mount: Deactivated successfully.
Jan 23 19:19:41 compute-0 podman[266481]: 2026-01-23 19:19:41.749507753 +0000 UTC m=+2.539141440 container remove aa280a4d95246651bfb70e50ea8ff6747dd9fa81e28389de4643587f55467bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:19:41 compute-0 sudo[266386]: pam_unix(sudo:session): session closed for user root
Jan 23 19:19:41 compute-0 systemd[1]: libpod-conmon-aa280a4d95246651bfb70e50ea8ff6747dd9fa81e28389de4643587f55467bf1.scope: Deactivated successfully.
Jan 23 19:19:41 compute-0 sudo[266527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:19:41 compute-0 sudo[266527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:19:41 compute-0 sudo[266527]: pam_unix(sudo:session): session closed for user root
Jan 23 19:19:41 compute-0 sudo[266552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- lvm list --format json
Jan 23 19:19:41 compute-0 sudo[266552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:19:42 compute-0 podman[266589]: 2026-01-23 19:19:42.198588052 +0000 UTC m=+0.047739964 container create 50d454e4a579ea757f6fdbd6ee600f88fba0fcbe9ea5a1b360bc7eaa41ae9dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 23 19:19:42 compute-0 systemd[1]: Started libpod-conmon-50d454e4a579ea757f6fdbd6ee600f88fba0fcbe9ea5a1b360bc7eaa41ae9dd6.scope.
Jan 23 19:19:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:19:42 compute-0 podman[266589]: 2026-01-23 19:19:42.171522443 +0000 UTC m=+0.020674365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:19:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:42 compute-0 podman[266589]: 2026-01-23 19:19:42.416278535 +0000 UTC m=+0.265430497 container init 50d454e4a579ea757f6fdbd6ee600f88fba0fcbe9ea5a1b360bc7eaa41ae9dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackburn, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 19:19:42 compute-0 podman[266589]: 2026-01-23 19:19:42.423815498 +0000 UTC m=+0.272967440 container start 50d454e4a579ea757f6fdbd6ee600f88fba0fcbe9ea5a1b360bc7eaa41ae9dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 23 19:19:42 compute-0 magical_blackburn[266605]: 167 167
Jan 23 19:19:42 compute-0 systemd[1]: libpod-50d454e4a579ea757f6fdbd6ee600f88fba0fcbe9ea5a1b360bc7eaa41ae9dd6.scope: Deactivated successfully.
Jan 23 19:19:42 compute-0 podman[266589]: 2026-01-23 19:19:42.522481681 +0000 UTC m=+0.371633613 container attach 50d454e4a579ea757f6fdbd6ee600f88fba0fcbe9ea5a1b360bc7eaa41ae9dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:19:42 compute-0 podman[266589]: 2026-01-23 19:19:42.524636564 +0000 UTC m=+0.373788526 container died 50d454e4a579ea757f6fdbd6ee600f88fba0fcbe9ea5a1b360bc7eaa41ae9dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 19:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb4f5c22b34aa45642a934f12cc49ac818d63c411a18cd005db7bf7fb5e860e6-merged.mount: Deactivated successfully.
Jan 23 19:19:42 compute-0 podman[266589]: 2026-01-23 19:19:42.853204788 +0000 UTC m=+0.702356700 container remove 50d454e4a579ea757f6fdbd6ee600f88fba0fcbe9ea5a1b360bc7eaa41ae9dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:19:42 compute-0 ceph-mon[75097]: pgmap v1511: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:42 compute-0 systemd[1]: libpod-conmon-50d454e4a579ea757f6fdbd6ee600f88fba0fcbe9ea5a1b360bc7eaa41ae9dd6.scope: Deactivated successfully.
Jan 23 19:19:42 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:43 compute-0 podman[266630]: 2026-01-23 19:19:43.033727116 +0000 UTC m=+0.056212531 container create e0c0d72f36d1bbb0015eb20ec27becc613fb4c15b0dbe981d2c08d07629068cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:19:43 compute-0 systemd[1]: Started libpod-conmon-e0c0d72f36d1bbb0015eb20ec27becc613fb4c15b0dbe981d2c08d07629068cb.scope.
Jan 23 19:19:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3452f8ec8db36ebf6f9efd946a02b4a2c5fee3433dd6a4e580087bd7367b75a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:43 compute-0 podman[266630]: 2026-01-23 19:19:43.013294008 +0000 UTC m=+0.035779453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3452f8ec8db36ebf6f9efd946a02b4a2c5fee3433dd6a4e580087bd7367b75a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3452f8ec8db36ebf6f9efd946a02b4a2c5fee3433dd6a4e580087bd7367b75a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3452f8ec8db36ebf6f9efd946a02b4a2c5fee3433dd6a4e580087bd7367b75a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:43 compute-0 podman[266630]: 2026-01-23 19:19:43.417171096 +0000 UTC m=+0.439656511 container init e0c0d72f36d1bbb0015eb20ec27becc613fb4c15b0dbe981d2c08d07629068cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 23 19:19:43 compute-0 podman[266630]: 2026-01-23 19:19:43.42513716 +0000 UTC m=+0.447622555 container start e0c0d72f36d1bbb0015eb20ec27becc613fb4c15b0dbe981d2c08d07629068cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 19:19:43 compute-0 podman[266630]: 2026-01-23 19:19:43.486545496 +0000 UTC m=+0.509030891 container attach e0c0d72f36d1bbb0015eb20ec27becc613fb4c15b0dbe981d2c08d07629068cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]: {
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:     "0": [
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:         {
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "devices": [
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "/dev/loop3"
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             ],
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_name": "ceph_lv0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_size": "21470642176",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=066002fe-249e-443f-8741-9dfd30f99103,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "name": "ceph_lv0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "tags": {
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.block_uuid": "3JAvTW-fQVb-oaOl-HfKU-Jgsp-hbm5-vF1IDF",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.cluster_name": "ceph",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.crush_device_class": "",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.encrypted": "0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.objectstore": "bluestore",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.osd_fsid": "066002fe-249e-443f-8741-9dfd30f99103",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.osd_id": "0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.type": "block",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.vdo": "0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.with_tpm": "0"
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             },
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "type": "block",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "vg_name": "ceph_vg0"
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:         }
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:     ],
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:     "1": [
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:         {
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "devices": [
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "/dev/loop4"
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             ],
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_name": "ceph_lv1",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_size": "21470642176",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6ed4487-bd34-4323-9098-4b01a850bb6f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "name": "ceph_lv1",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "tags": {
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.block_uuid": "x2mxFh-f6F6-uhju-seb3-YmTZ-N7cN-noF60L",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.cluster_name": "ceph",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.crush_device_class": "",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.encrypted": "0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.objectstore": "bluestore",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.osd_fsid": "f6ed4487-bd34-4323-9098-4b01a850bb6f",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.osd_id": "1",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.type": "block",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.vdo": "0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.with_tpm": "0"
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             },
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "type": "block",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "vg_name": "ceph_vg1"
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:         }
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:     ],
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:     "2": [
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:         {
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "devices": [
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "/dev/loop5"
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             ],
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_name": "ceph_lv2",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_size": "21470642176",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4da2adac-d096-5ead-9ec5-3108259ba9e4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a7717231-3361-44cc-a7c1-2d888f6f7851,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "lv_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "name": "ceph_lv2",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "tags": {
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.block_uuid": "TOj1Yj-CwTT-KPoo-26cm-fQX7-kf3g-muJ7h0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.cluster_fsid": "4da2adac-d096-5ead-9ec5-3108259ba9e4",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.cluster_name": "ceph",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.crush_device_class": "",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.encrypted": "0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.objectstore": "bluestore",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.osd_fsid": "a7717231-3361-44cc-a7c1-2d888f6f7851",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.osd_id": "2",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.type": "block",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.vdo": "0",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:                 "ceph.with_tpm": "0"
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             },
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "type": "block",
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:             "vg_name": "ceph_vg2"
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:         }
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]:     ]
Jan 23 19:19:43 compute-0 xenodochial_jang[266646]: }
Jan 23 19:19:43 compute-0 systemd[1]: libpod-e0c0d72f36d1bbb0015eb20ec27becc613fb4c15b0dbe981d2c08d07629068cb.scope: Deactivated successfully.
Jan 23 19:19:43 compute-0 podman[266630]: 2026-01-23 19:19:43.744446318 +0000 UTC m=+0.766931713 container died e0c0d72f36d1bbb0015eb20ec27becc613fb4c15b0dbe981d2c08d07629068cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 19:19:44 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:45 compute-0 ceph-mon[75097]: pgmap v1512: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3452f8ec8db36ebf6f9efd946a02b4a2c5fee3433dd6a4e580087bd7367b75a0-merged.mount: Deactivated successfully.
Jan 23 19:19:46 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:48 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:49 compute-0 ceph-mon[75097]: pgmap v1513: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:49 compute-0 podman[266630]: 2026-01-23 19:19:49.434355417 +0000 UTC m=+6.456840812 container remove e0c0d72f36d1bbb0015eb20ec27becc613fb4c15b0dbe981d2c08d07629068cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 19:19:49 compute-0 sudo[266552]: pam_unix(sudo:session): session closed for user root
Jan 23 19:19:49 compute-0 systemd[1]: libpod-conmon-e0c0d72f36d1bbb0015eb20ec27becc613fb4c15b0dbe981d2c08d07629068cb.scope: Deactivated successfully.
Jan 23 19:19:49 compute-0 sudo[266667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 19:19:49 compute-0 sudo[266667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:19:49 compute-0 sudo[266667]: pam_unix(sudo:session): session closed for user root
Jan 23 19:19:49 compute-0 sudo[266692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4da2adac-d096-5ead-9ec5-3108259ba9e4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4da2adac-d096-5ead-9ec5-3108259ba9e4 -- raw list --format json
Jan 23 19:19:49 compute-0 sudo[266692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:19:49 compute-0 podman[266728]: 2026-01-23 19:19:49.954060645 +0000 UTC m=+0.102977318 container create 9cc27ca11d6266d6214843dc6df391d51d11e87cc1612dc59270ec879560f7d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:19:49 compute-0 podman[266728]: 2026-01-23 19:19:49.876870826 +0000 UTC m=+0.025787519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:19:50 compute-0 systemd[1]: Started libpod-conmon-9cc27ca11d6266d6214843dc6df391d51d11e87cc1612dc59270ec879560f7d6.scope.
Jan 23 19:19:50 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:19:50 compute-0 podman[266728]: 2026-01-23 19:19:50.183190857 +0000 UTC m=+0.332107530 container init 9cc27ca11d6266d6214843dc6df391d51d11e87cc1612dc59270ec879560f7d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jemison, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 23 19:19:50 compute-0 podman[266728]: 2026-01-23 19:19:50.189660004 +0000 UTC m=+0.338576677 container start 9cc27ca11d6266d6214843dc6df391d51d11e87cc1612dc59270ec879560f7d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 23 19:19:50 compute-0 romantic_jemison[266745]: 167 167
Jan 23 19:19:50 compute-0 systemd[1]: libpod-9cc27ca11d6266d6214843dc6df391d51d11e87cc1612dc59270ec879560f7d6.scope: Deactivated successfully.
Jan 23 19:19:50 compute-0 podman[266728]: 2026-01-23 19:19:50.500769553 +0000 UTC m=+0.649686226 container attach 9cc27ca11d6266d6214843dc6df391d51d11e87cc1612dc59270ec879560f7d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:19:50 compute-0 podman[266728]: 2026-01-23 19:19:50.501371097 +0000 UTC m=+0.650287780 container died 9cc27ca11d6266d6214843dc6df391d51d11e87cc1612dc59270ec879560f7d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jemison, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 19:19:50 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:51 compute-0 ceph-mon[75097]: pgmap v1514: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:51 compute-0 ceph-mon[75097]: pgmap v1515: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d357ea5fdc8782e18b6b205bd1b7b9aba5cf258850659170500b246ac63c470-merged.mount: Deactivated successfully.
Jan 23 19:19:51 compute-0 podman[266728]: 2026-01-23 19:19:51.383212899 +0000 UTC m=+1.532129572 container remove 9cc27ca11d6266d6214843dc6df391d51d11e87cc1612dc59270ec879560f7d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 23 19:19:51 compute-0 systemd[1]: libpod-conmon-9cc27ca11d6266d6214843dc6df391d51d11e87cc1612dc59270ec879560f7d6.scope: Deactivated successfully.
Jan 23 19:19:51 compute-0 podman[266769]: 2026-01-23 19:19:51.584463971 +0000 UTC m=+0.077388687 container create 7150e2ffa3451d8e056396900b196211c9ed4db03a46951b3b16690412776f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 19:19:51 compute-0 podman[266769]: 2026-01-23 19:19:51.528038426 +0000 UTC m=+0.020963362 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 23 19:19:51 compute-0 systemd[1]: Started libpod-conmon-7150e2ffa3451d8e056396900b196211c9ed4db03a46951b3b16690412776f97.scope.
Jan 23 19:19:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 19:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1d505600942f149dff604dd4ef867e7765add242c17653c575afd57d44f743/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1d505600942f149dff604dd4ef867e7765add242c17653c575afd57d44f743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1d505600942f149dff604dd4ef867e7765add242c17653c575afd57d44f743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1d505600942f149dff604dd4ef867e7765add242c17653c575afd57d44f743/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 19:19:51 compute-0 podman[266769]: 2026-01-23 19:19:51.935110292 +0000 UTC m=+0.428034998 container init 7150e2ffa3451d8e056396900b196211c9ed4db03a46951b3b16690412776f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_clarke, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:19:51 compute-0 podman[266769]: 2026-01-23 19:19:51.944907941 +0000 UTC m=+0.437832637 container start 7150e2ffa3451d8e056396900b196211c9ed4db03a46951b3b16690412776f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:19:52 compute-0 podman[266769]: 2026-01-23 19:19:52.022266785 +0000 UTC m=+0.515191481 container attach 7150e2ffa3451d8e056396900b196211c9ed4db03a46951b3b16690412776f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 19:19:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:52 compute-0 lvm[266865]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:19:52 compute-0 lvm[266864]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:19:52 compute-0 lvm[266865]: VG ceph_vg1 finished
Jan 23 19:19:52 compute-0 lvm[266864]: VG ceph_vg0 finished
Jan 23 19:19:52 compute-0 lvm[266867]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:19:52 compute-0 lvm[266867]: VG ceph_vg2 finished
Jan 23 19:19:52 compute-0 ceph-mon[75097]: pgmap v1516: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:52 compute-0 crazy_clarke[266786]: {}
Jan 23 19:19:52 compute-0 systemd[1]: libpod-7150e2ffa3451d8e056396900b196211c9ed4db03a46951b3b16690412776f97.scope: Deactivated successfully.
Jan 23 19:19:52 compute-0 systemd[1]: libpod-7150e2ffa3451d8e056396900b196211c9ed4db03a46951b3b16690412776f97.scope: Consumed 1.284s CPU time.
Jan 23 19:19:52 compute-0 podman[266769]: 2026-01-23 19:19:52.764680409 +0000 UTC m=+1.257605105 container died 7150e2ffa3451d8e056396900b196211c9ed4db03a46951b3b16690412776f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 23 19:19:52 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec1d505600942f149dff604dd4ef867e7765add242c17653c575afd57d44f743-merged.mount: Deactivated successfully.
Jan 23 19:19:53 compute-0 podman[266769]: 2026-01-23 19:19:53.264958945 +0000 UTC m=+1.757883641 container remove 7150e2ffa3451d8e056396900b196211c9ed4db03a46951b3b16690412776f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 19:19:53 compute-0 systemd[1]: libpod-conmon-7150e2ffa3451d8e056396900b196211c9ed4db03a46951b3b16690412776f97.scope: Deactivated successfully.
Jan 23 19:19:53 compute-0 sudo[266692]: pam_unix(sudo:session): session closed for user root
Jan 23 19:19:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 19:19:53 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:19:53 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 19:19:53 compute-0 ceph-mon[75097]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:19:53 compute-0 sudo[266884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 19:19:53 compute-0 sudo[266884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 19:19:53 compute-0 sudo[266884]: pam_unix(sudo:session): session closed for user root
Jan 23 19:19:54 compute-0 ceph-mon[75097]: pgmap v1517: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:19:54 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' 
Jan 23 19:19:54 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:55 compute-0 ceph-mon[75097]: pgmap v1518: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 19:19:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/682299721' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:19:56 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 19:19:56 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/682299721' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:19:56 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/682299721' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 23 19:19:57 compute-0 ceph-mon[75097]: from='client.? 192.168.122.10:0/682299721' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 23 19:19:57 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:19:58 compute-0 ceph-mon[75097]: pgmap v1519: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:19:58 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:20:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:20:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:20:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:20:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:20:00 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:20:00 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:01 compute-0 ceph-mon[75097]: pgmap v1520: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:02 compute-0 podman[266909]: 2026-01-23 19:20:02.026414383 +0000 UTC m=+0.100316284 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 19:20:02 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:20:02 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:03 compute-0 ceph-mon[75097]: pgmap v1521: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:04 compute-0 ceph-mon[75097]: pgmap v1522: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:04 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:06 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:07 compute-0 ceph-mon[75097]: pgmap v1523: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:07 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:20:08 compute-0 ceph-mon[75097]: pgmap v1524: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:08 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:09 compute-0 nova_compute[240119]: 2026-01-23 19:20:09.357 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:20:09 compute-0 podman[266936]: 2026-01-23 19:20:09.970363077 +0000 UTC m=+0.053394301 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 19:20:10 compute-0 ceph-mon[75097]: pgmap v1525: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:10 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:11 compute-0 ceph-mon[75097]: pgmap v1526: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:12 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:20:12 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:20:13.601 155616 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:20:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:20:13.602 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:20:13 compute-0 ovn_metadata_agent[155611]: 2026-01-23 19:20:13.602 155616 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:20:14 compute-0 ceph-mon[75097]: pgmap v1527: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:14 compute-0 nova_compute[240119]: 2026-01-23 19:20:14.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:20:14 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:16 compute-0 ceph-mon[75097]: pgmap v1528: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:16 compute-0 nova_compute[240119]: 2026-01-23 19:20:16.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:20:16 compute-0 nova_compute[240119]: 2026-01-23 19:20:16.349 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:20:16 compute-0 nova_compute[240119]: 2026-01-23 19:20:16.561 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:20:16 compute-0 nova_compute[240119]: 2026-01-23 19:20:16.562 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:20:16 compute-0 nova_compute[240119]: 2026-01-23 19:20:16.562 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:20:16 compute-0 nova_compute[240119]: 2026-01-23 19:20:16.563 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 19:20:16 compute-0 nova_compute[240119]: 2026-01-23 19:20:16.563 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:20:16 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:20:17 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2746764738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.130 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.327 240123 WARNING nova.virt.libvirt.driver [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.329 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4919MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.329 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.329 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 19:20:17 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2746764738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.778 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.779 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.867 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing inventories for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 19:20:17 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.971 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Updating ProviderTree inventory for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.972 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Updating inventory in ProviderTree for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 19:20:17 compute-0 nova_compute[240119]: 2026-01-23 19:20:17.986 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing aggregate associations for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 19:20:18 compute-0 nova_compute[240119]: 2026-01-23 19:20:18.006 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Refreshing trait associations for resource provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 19:20:18 compute-0 nova_compute[240119]: 2026-01-23 19:20:18.023 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 19:20:18 compute-0 ceph-mon[75097]: pgmap v1529: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:18 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 19:20:18 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166602585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:20:18 compute-0 nova_compute[240119]: 2026-01-23 19:20:18.626 240123 DEBUG oslo_concurrency.processutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 19:20:18 compute-0 nova_compute[240119]: 2026-01-23 19:20:18.632 240123 DEBUG nova.compute.provider_tree [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed in ProviderTree for provider: 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 19:20:18 compute-0 nova_compute[240119]: 2026-01-23 19:20:18.647 240123 DEBUG nova.scheduler.client.report [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Inventory has not changed for provider 64fe6ca7-28b4-4adf-a124-1c61d6858eb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 19:20:18 compute-0 nova_compute[240119]: 2026-01-23 19:20:18.648 240123 DEBUG nova.compute.resource_tracker [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 19:20:18 compute-0 nova_compute[240119]: 2026-01-23 19:20:18.649 240123 DEBUG oslo_concurrency.lockutils [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 19:20:18 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:19 compute-0 sshd-session[267000]: Accepted publickey for zuul from 192.168.122.10 port 52916 ssh2: ECDSA SHA256:/SAe7NyurzuctsgOZZul1Gwzmir/PVARM17tVeVeXp4
Jan 23 19:20:19 compute-0 systemd-logind[792]: New session 54 of user zuul.
Jan 23 19:20:19 compute-0 systemd[1]: Started Session 54 of User zuul.
Jan 23 19:20:19 compute-0 sshd-session[267000]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 19:20:19 compute-0 sudo[267004]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 23 19:20:19 compute-0 sudo[267004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 19:20:19 compute-0 nova_compute[240119]: 2026-01-23 19:20:19.648 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:20:19 compute-0 nova_compute[240119]: 2026-01-23 19:20:19.648 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:20:19 compute-0 nova_compute[240119]: 2026-01-23 19:20:19.649 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:20:19 compute-0 nova_compute[240119]: 2026-01-23 19:20:19.649 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 19:20:19 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4166602585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 23 19:20:20 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:21 compute-0 ceph-mon[75097]: pgmap v1530: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:21 compute-0 nova_compute[240119]: 2026-01-23 19:20:21.350 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:20:21 compute-0 nova_compute[240119]: 2026-01-23 19:20:21.351 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 19:20:21 compute-0 nova_compute[240119]: 2026-01-23 19:20:21.352 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 19:20:21 compute-0 nova_compute[240119]: 2026-01-23 19:20:21.372 240123 DEBUG nova.compute.manager [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 19:20:21 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14824 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:22 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14826 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:22 compute-0 ceph-mon[75097]: pgmap v1531: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:22 compute-0 ceph-mon[75097]: from='client.14824 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:20:22 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 23 19:20:22 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1096685540' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 23 19:20:22 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:23 compute-0 nova_compute[240119]: 2026-01-23 19:20:23.366 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:20:23 compute-0 ceph-mon[75097]: from='client.14826 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:23 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1096685540' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 23 19:20:24 compute-0 nova_compute[240119]: 2026-01-23 19:20:24.348 240123 DEBUG oslo_service.periodic_task [None req-c3ff176f-4575-4984-ba53-bc5858eca076 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 19:20:24 compute-0 ceph-mon[75097]: pgmap v1532: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:24 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:25 compute-0 ovs-vsctl[267287]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 23 19:20:26 compute-0 virtqemud[240019]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 23 19:20:26 compute-0 virtqemud[240019]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 23 19:20:26 compute-0 virtqemud[240019]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 23 19:20:26 compute-0 ceph-mon[75097]: pgmap v1533: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:26 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:27 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: cache status {prefix=cache status} (starting...)
Jan 23 19:20:27 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: client ls {prefix=client ls} (starting...)
Jan 23 19:20:27 compute-0 lvm[267612]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 19:20:27 compute-0 lvm[267612]: VG ceph_vg0 finished
Jan 23 19:20:27 compute-0 lvm[267636]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 23 19:20:27 compute-0 lvm[267636]: VG ceph_vg2 finished
Jan 23 19:20:27 compute-0 lvm[267639]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 23 19:20:27 compute-0 lvm[267639]: VG ceph_vg1 finished
Jan 23 19:20:27 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14830 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:27 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:20:27 compute-0 ceph-mon[75097]: pgmap v1534: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:28 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: damage ls {prefix=damage ls} (starting...)
Jan 23 19:20:28 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump loads {prefix=dump loads} (starting...)
Jan 23 19:20:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14832 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:28 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 23 19:20:28 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 23 19:20:28 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14836 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:28 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 23 19:20:28 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 23 19:20:28 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381066543' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 23 19:20:28 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 23 19:20:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Optimize plan auto_2026-01-23_19:20:28
Jan 23 19:20:28 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:28 compute-0 ceph-mgr[75390]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 19:20:28 compute-0 ceph-mgr[75390]: [balancer INFO root] do_upmap
Jan 23 19:20:28 compute-0 ceph-mgr[75390]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'images', 'backups']
Jan 23 19:20:28 compute-0 ceph-mgr[75390]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 19:20:29 compute-0 ceph-mon[75097]: from='client.14830 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:29 compute-0 ceph-mon[75097]: from='client.14832 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:29 compute-0 ceph-mon[75097]: from='client.14836 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:29 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/381066543' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 23 19:20:29 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 23 19:20:29 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14838 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:29 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:20:29.230+0000 7f06d8601640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 19:20:29 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 19:20:29 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 23 19:20:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 19:20:29 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3749567955' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:20:29 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: ops {prefix=ops} (starting...)
Jan 23 19:20:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 23 19:20:29 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636227044' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 23 19:20:29 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 23 19:20:29 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2523186663' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 23 19:20:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:20:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:20:30 compute-0 ceph-mon[75097]: pgmap v1535: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:30 compute-0 ceph-mon[75097]: from='client.14838 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:30 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3749567955' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 23 19:20:30 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3636227044' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 23 19:20:30 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2523186663' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 23 19:20:30 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: session ls {prefix=session ls} (starting...)
Jan 23 19:20:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 23 19:20:30 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2412298220' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 23 19:20:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 23 19:20:30 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854433320' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 23 19:20:30 compute-0 ceph-mds[96162]: mds.cephfs.compute-0.kgapua asok_command: status {prefix=status} (starting...)
Jan 23 19:20:30 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14850 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:20:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:20:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 19:20:30 compute-0 ceph-mgr[75390]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 19:20:30 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 23 19:20:30 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/759426179' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 23 19:20:30 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 19:20:31 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2412298220' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 23 19:20:31 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/854433320' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 23 19:20:31 compute-0 ceph-mon[75097]: from='client.14850 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:31 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/759426179' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 23 19:20:31 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14854 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 23 19:20:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294027387' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 23 19:20:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 23 19:20:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644576545' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 23 19:20:31 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 19:20:31 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286374353' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 23 19:20:32 compute-0 ceph-mon[75097]: pgmap v1536: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:32 compute-0 ceph-mon[75097]: from='client.14854 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:32 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1294027387' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 23 19:20:32 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/644576545' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 23 19:20:32 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4286374353' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 23 19:20:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 23 19:20:32 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204281759' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 23 19:20:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 23 19:20:32 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1799015010' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 23 19:20:32 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14866 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:32 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 19:20:32 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:20:32.748+0000 7f06d8601640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 19:20:32 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:20:33 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:33 compute-0 podman[268343]: 2026-01-23 19:20:33.022096229 +0000 UTC m=+0.092564746 container health_status 0ce48fbc7286fd18759253fcf8d8118262b5bfa572e67e8dfb5e735e620db0cd (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 19:20:33 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 23 19:20:33 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4001095600' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 23 19:20:33 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14872 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:02.504625+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:03.504759+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:04.504986+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:05.505214+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:06.505343+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:07.505465+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:08.505838+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:09.506005+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:10.506145+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:11.506294+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:12.506450+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:13.506610+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:14.506730+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:15.513220+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:16.513373+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:17.513509+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:18.513657+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:19.513785+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:20.513900+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:21.514011+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:22.514127+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:23.514243+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:24.514436+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:25.514610+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:26.514754+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:27.514915+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:28.515047+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:29.515664+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:30.516144+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:31.516351+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:32.516726+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:33.517490+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:34.517672+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:35.517852+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:36.518130+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:37.518466+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:38.518753+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:39.518905+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:40.519334+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:41.519474+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:42.519623+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:43.519763+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:44.519909+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:45.520066+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:46.520222+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:47.520374+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:48.520537+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:49.520628+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:50.520773+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:51.520929+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:52.521084+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:53.521269+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:54.521419+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:55.521616+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:56.521732+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:57.521937+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:58.522088+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:59.522229+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:00.522402+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:01.522647+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:02.522798+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:03.522931+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:04.523080+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:05.523265+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:06.523408+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:07.523544+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:08.523663+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:09.523814+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:10.523945+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:11.524170+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:12.524375+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:13.524508+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:14.524792+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:15.525025+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:16.525158+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:17.525384+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:18.529212+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:19.529342+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:20.529498+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:21.529639+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:22.530381+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:23.530827+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:24.531051+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:25.531290+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:26.531468+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:27.531632+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:28.531825+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:29.531994+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:30.532120+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:31.532319+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:32.532473+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:33.532616+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:34.532779+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:35.532952+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:36.533087+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:37.533198+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:38.533377+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:39.533604+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:40.533751+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:41.533920+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:42.534074+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:43.534306+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:44.534425+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:45.534627+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:46.534750+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:47.534876+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:48.535010+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:49.535174+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:50.535303+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:51.535424+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:52.535605+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:53.535755+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:54.535890+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:55.536084+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:56.536303+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:57.536490+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:58.536637+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:59.536787+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:00.536988+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:01.537148+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:02.537306+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:03.537518+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:04.537742+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:05.537983+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:06.538158+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:07.538293+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:08.538455+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:09.538593+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:10.538778+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:11.538950+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:12.539084+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:13.539237+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:14.539358+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:15.539531+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:16.539675+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:17.539844+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:18.540001+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:19.540134+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:20.540343+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:21.540471+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:22.540813+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:23.540950+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:24.541075+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:25.541240+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:26.541463+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:27.541654+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:28.541796+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:29.541930+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:30.542066+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:31.542196+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:32.542320+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:33.542439+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:34.542627+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:35.542801+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:36.542979+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:37.543142+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:38.543335+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:39.543497+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:40.543766+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:41.543909+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:42.544030+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:43.544881+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:44.545038+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:45.545201+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:46.545363+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:47.545543+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:48.545959+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:49.546134+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:50.546263+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:51.546461+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:52.546849+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:53.547014+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:54.547144+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:55.547337+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:56.547610+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:57.547783+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:58.547926+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:59.548086+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:00.548370+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:01.548509+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:02.548635+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:03.548756+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:04.548894+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:05.549055+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:06.549313+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:07.549433+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:08.549621+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc ms_handle_reset ms_handle_reset con 0x564373d84000
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: get_auth_request con 0x564373ac8400 auth_method 0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:09.549747+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:10.549872+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:11.549985+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:12.550099+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:13.550212+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:14.550357+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:15.550553+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:16.550727+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:17.550858+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:18.551354+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:19.551512+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:20.551634+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:21.551764+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:22.551896+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:23.552028+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:24.552176+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:25.552357+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:26.552512+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:27.552661+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:28.552815+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:29.552943+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:30.553094+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:31.553236+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:32.553360+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:33.553498+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:34.553617+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:35.553923+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:36.554100+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:37.554234+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:38.554390+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:39.554628+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:40.554807+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:41.554959+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:42.555101+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:43.555942+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:44.556600+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:45.557082+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:46.557297+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:47.558083+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:48.558788+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:49.559507+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:50.559734+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:51.560383+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:52.560552+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:53.560968+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:54.561413+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:55.561628+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:56.561764+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:57.561894+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:58.562055+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 843776 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:59.562243+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921066 data_alloc: 218103808 data_used: 5328
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:00.562453+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:01.562703+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:02.562957+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x564375ddf800
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.236267090s of 300.497070312s, submitted: 90
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:03.563156+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:04.563487+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:05.563697+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:06.563840+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:07.563974+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:08.564139+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:09.564363+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:10.564534+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:11.564669+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:12.564863+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:13.565021+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:14.565178+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:15.565337+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:16.565639+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:17.565836+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:18.566019+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:19.566220+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:20.566341+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:21.566509+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:22.566683+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:23.566805+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:24.566953+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:25.567123+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:26.567297+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:27.567441+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:28.567646+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:29.567788+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:30.567979+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:31.568156+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:32.568301+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:33.568458+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:34.568634+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:35.568799+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:36.568930+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:37.569083+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:38.569340+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:39.569472+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:40.569611+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:41.569740+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:42.569885+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:43.570025+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:44.570166+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:45.570381+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:46.570580+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:47.570729+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:48.570847+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:49.571020+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:50.571194+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:51.571380+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:52.571512+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:53.571644+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:54.571799+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:55.571981+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:56.572101+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:57.572294+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:58.572470+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:59.572619+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:00.572765+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:01.572918+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:02.573073+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:03.573230+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:04.573337+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:05.573462+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:06.573602+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:07.573750+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:08.573875+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:09.574172+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:10.574307+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:11.574462+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:12.574611+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:13.574744+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:14.574922+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:15.575103+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:16.575244+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:17.575374+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:18.575487+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:19.575613+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:20.575734+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:21.575860+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:22.575972+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:23.576093+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:24.576250+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:25.576462+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:26.576649+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:27.576851+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:28.577040+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:29.577172+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 720896 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:30.577346+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:31.577503+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:32.577635+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:33.577782+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:34.577919+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:35.578073+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:36.578218+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:37.578360+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:38.578481+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:39.578635+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:40.578774+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:41.578896+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:42.579030+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:43.579157+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:44.579266+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:45.579462+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:46.579648+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:47.580108+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:48.580291+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:49.580504+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:50.580770+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:51.580943+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:52.581057+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:53.581175+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:54.581328+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:55.581481+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:56.581636+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:57.581745+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:58.581875+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:59.581999+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 696320 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:00.582223+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 696320 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:01.582458+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:02.582626+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:03.582756+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:04.582946+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:05.583187+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:06.583322+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:07.583456+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:08.583585+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:09.583765+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:10.583975+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:11.584136+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:12.584263+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:13.584395+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:14.584573+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:15.584733+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:16.584897+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:17.585121+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:18.585272+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:19.585428+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:20.585625+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:21.585779+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:22.585917+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:23.586044+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:24.586229+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:25.586492+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:26.586695+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:27.586902+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:28.587097+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:29.587283+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:30.587439+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:31.587671+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:32.587856+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:33.587992+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:34.588187+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:35.588395+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:36.588523+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:37.588691+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:38.588871+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:39.589076+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:40.589269+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:41.589473+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:42.589714+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:43.589867+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:44.590006+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:45.590193+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:46.590318+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:47.590477+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:48.590662+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:49.590815+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:50.591019+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:51.591174+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:52.591381+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:53.591644+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:54.591817+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:55.592241+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:56.592386+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:57.592510+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:58.592692+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:59.592901+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 622592 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:00.593132+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:01.593321+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:02.593515+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:03.593706+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:04.593863+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:05.594042+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:06.594197+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:07.594427+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:08.594618+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:09.594807+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:10.595007+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:11.595228+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:12.595370+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:13.595530+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000048s
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:14.595682+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:15.595878+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:16.596095+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:17.596270+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:18.596419+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:19.596648+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 598016 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:20.596820+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 598016 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:21.596952+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 598016 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:22.597122+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 598016 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:23.597257+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 598016 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:24.597401+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:25.597592+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:26.597723+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:27.597862+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:28.598015+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:29.598470+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:30.599993+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:31.601075+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:32.601815+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:33.602394+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:34.602894+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:35.603359+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:36.603723+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:37.603982+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:38.604202+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:39.604422+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:40.604766+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:41.605100+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:42.605408+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:43.605697+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:44.605954+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:45.606210+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:46.606786+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:47.607034+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:48.607195+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:49.607439+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:50.607627+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:51.607848+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:52.608005+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:53.608149+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:54.608282+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:55.608474+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:56.608636+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:57.608795+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:58.608978+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:59.609139+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:00.609356+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 5664 writes, 24K keys, 5664 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5664 writes, 915 syncs, 6.19 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5643720d5a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:01.609488+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:02.612240+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:03.612400+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:04.612616+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:05.612785+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:06.612959+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:07.613115+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:08.613281+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:09.613414+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:10.613640+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:11.613819+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:12.613988+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:13.614147+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:14.614336+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:15.614455+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:16.614612+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:17.614790+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:18.615009+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:19.615266+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:20.615469+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:21.615626+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:22.615775+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:23.616111+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:24.616383+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:25.616713+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:26.616905+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:27.617079+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:28.617349+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:29.617619+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:30.617774+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:31.617909+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:32.618083+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:33.618289+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:34.618676+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:35.618961+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:36.619166+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:37.619441+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:38.619667+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:39.619794+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:40.619973+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:41.620160+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:42.620340+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:43.620549+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:44.620863+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:45.621061+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:46.621252+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:47.621446+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:48.621659+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:49.621984+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:50.622145+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:51.622352+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:52.622509+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:53.622686+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:54.622858+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:55.623023+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:56.623193+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:57.623342+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:58.623533+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:59.623745+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:00.623879+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:01.624046+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 606208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:02.624273+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 606208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.751983643s of 299.792205811s, submitted: 24
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:03.624412+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 606208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [0,0,0,1])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:04.624664+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1564672 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [0,0,0,0,1])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:05.624844+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:06.625314+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:07.625653+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:08.625856+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:09.626054+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:10.626295+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:11.626458+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:12.626668+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:13.626806+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:14.626963+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:15.627215+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:16.627352+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:17.627493+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:18.627621+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:19.627757+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:20.627932+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:21.628076+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:22.628227+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:23.628376+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:24.628550+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:25.628781+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:26.628947+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:27.629086+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:28.629344+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:29.629482+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:30.629673+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:31.629942+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:32.630208+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:33.630356+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:34.630496+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:35.630731+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:36.630928+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:37.631006+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:38.631133+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:39.631308+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:40.631448+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:41.631640+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:42.631806+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:43.631967+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:44.632157+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:45.632293+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:46.632448+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:47.632779+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:48.632943+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:49.633061+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:50.633229+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:51.633405+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:52.633613+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:53.633765+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:54.634027+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:55.634267+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:56.634430+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:57.634587+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:58.634671+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:59.634827+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:00.635001+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:01.635161+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:02.635346+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:03.635510+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:04.635681+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:05.635938+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:06.636107+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:07.636221+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:08.636400+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:09.636659+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:10.636784+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:11.636917+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:12.637012+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:13.637134+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:14.637266+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:15.637449+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:16.637616+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:17.637722+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:18.637872+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:19.638003+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:20.638137+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:21.638290+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:22.638415+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:23.638585+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:24.638797+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:25.638993+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:26.639150+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:27.639287+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:28.639447+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:29.639627+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:30.639758+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:31.639878+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:32.640034+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:33.640202+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:34.640344+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:35.640627+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:36.640870+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:37.641162+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:38.641332+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:39.641468+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:40.641629+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:41.641794+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:42.641934+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:43.642001+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:44.642168+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:45.642335+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:46.642462+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:47.642579+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:48.642962+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:49.643075+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:50.643164+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:51.643304+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1499136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:52.643416+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:53.643640+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:54.643795+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:55.643962+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:56.644125+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:57.644261+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:58.644427+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:59.644618+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:00.644840+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:01.645055+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:02.645425+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:03.645628+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:04.645869+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:05.646122+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:06.646309+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:07.646503+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:08.646716+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:09.646890+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1458176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:10.647029+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1458176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:11.647175+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1458176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:12.647323+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1458176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:13.647480+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1458176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:14.647851+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:15.648114+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:16.648355+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:17.648608+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:18.648811+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:19.649034+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:20.649254+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:21.649433+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:22.649590+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:23.649744+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:24.650003+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:25.650278+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:26.650521+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:27.650679+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:28.650822+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:29.651093+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1425408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:30.651316+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1425408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:31.651479+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1425408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:32.651698+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1425408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:33.651861+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1425408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:34.652074+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:35.652261+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:36.652430+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:37.652650+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:38.652878+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:39.653054+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:40.653272+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:41.653425+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:42.653553+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:43.653686+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:44.653816+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:45.653987+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:46.654191+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:47.654396+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:48.654670+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:49.654891+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:50.655121+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:51.655292+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:52.655523+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:53.655741+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:54.655967+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:55.656191+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:56.656326+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921450 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xb2c70/0x167000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:57.656470+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:58.656614+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x564375ddfc00
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1245184 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 175.503921509s of 176.320312500s, submitted: 90
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:59.656779+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1236992 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:00.656944+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb480c/0x16a000, compress 0x0/0x0/0x0, omap 0xf6b4, meta 0x2bc094c), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 17883136 heap: 92192768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:01.657129+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973073 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 17874944 heap: 92192768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:02.657274+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 121 ms_handle_reset con 0x564375ddfc00 session 0x564373aa7a40
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 17850368 heap: 92192768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x5643756ef400
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:03.657478+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 24731648 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:04.657667+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 121 handle_osd_map epochs [121,122], i have 122, src has [1,122]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 122 ms_handle_reset con 0x5643756ef400 session 0x5643762716c0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 24698880 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:05.657851+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 24698880 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:06.658017+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023290 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fbeb4000/0x0/0x4ffc00000, data 0x10b9b8f/0x1174000, compress 0x0/0x0/0x0, omap 0x10659, meta 0x2bbf9a7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 24698880 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:07.658235+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 24698880 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:08.658418+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fbeb4000/0x0/0x4ffc00000, data 0x10b9b8f/0x1174000, compress 0x0/0x0/0x0, omap 0x10659, meta 0x2bbf9a7), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 24698880 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:09.658644+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 24690688 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:10.658798+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 24690688 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:11.659014+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023290 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 24690688 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.607306480s of 12.865227699s, submitted: 43
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:12.659197+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb3000/0x0/0x4ffc00000, data 0x10bb60e/0x1177000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:13.659382+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:14.659593+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:15.659779+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:16.659964+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025456 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:17.660202+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:18.660393+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb3000/0x0/0x4ffc00000, data 0x10bb60e/0x1177000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:19.660596+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:20.660793+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb3000/0x0/0x4ffc00000, data 0x10bb60e/0x1177000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:21.660992+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025456 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb3000/0x0/0x4ffc00000, data 0x10bb60e/0x1177000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:22.661192+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 24682496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:23.661336+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x564373ac8000
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.464639664s of 11.474132538s, submitted: 13
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 24551424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:24.661532+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 24551424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:25.661732+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 10
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 24633344 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:26.661909+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb2000/0x0/0x4ffc00000, data 0x10bb6a9/0x1178000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028120 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:27.662128+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:28.662281+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb2000/0x0/0x4ffc00000, data 0x10bb7df/0x117a000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:29.662464+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:30.662886+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:31.663059+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029668 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb2000/0x0/0x4ffc00000, data 0x10bb7df/0x117a000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:32.663207+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 24600576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb2000/0x0/0x4ffc00000, data 0x10bb7df/0x117a000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:33.663393+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 11
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb2000/0x0/0x4ffc00000, data 0x10bb7df/0x117a000, compress 0x0/0x0/0x0, omap 0x1093b, meta 0x2bbf6c5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 24535040 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:34.663669+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 24535040 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:35.663926+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x56437445ec00
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.099026680s of 12.106583595s, submitted: 3
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 123 handle_osd_map epochs [123,124], i have 124, src has [1,124]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24403968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:36.664101+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031326 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24403968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:37.664679+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24403968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:38.664867+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbeaf000/0x0/0x4ffc00000, data 0x10bd2ae/0x117b000, compress 0x0/0x0/0x0, omap 0x10bc6, meta 0x2bbf43a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24403968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:39.665000+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24403968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:40.665136+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:41.665266+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033510 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:42.665411+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:43.665593+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:44.665754+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbeab000/0x0/0x4ffc00000, data 0x10bedc8/0x117f000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbeab000/0x0/0x4ffc00000, data 0x10bedc8/0x117f000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:45.665979+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:46.666142+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035202 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:47.666309+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbeab000/0x0/0x4ffc00000, data 0x10bedc8/0x117f000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:48.666450+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 24395776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:49.666660+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.533925056s of 13.742151260s, submitted: 39
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 24387584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:50.666832+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 24387584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:51.666961+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033764 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 24387584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:52.667120+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 24387584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:53.667254+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbead000/0x0/0x4ffc00000, data 0x10bed2d/0x117e000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 24387584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:54.667417+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:55.667608+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:56.667840+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbead000/0x0/0x4ffc00000, data 0x10bed2d/0x117e000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033764 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:57.668012+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:58.668170+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbead000/0x0/0x4ffc00000, data 0x10bed2d/0x117e000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:59.668353+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:00.668613+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbead000/0x0/0x4ffc00000, data 0x10bed2d/0x117e000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:01.668794+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033764 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:02.668930+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24379392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:03.669180+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.992928505s of 13.995832443s, submitted: 1
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbead000/0x0/0x4ffc00000, data 0x10bed2d/0x117e000, compress 0x0/0x0/0x0, omap 0x10eaa, meta 0x2bbf156), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:04.669388+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:05.669622+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:06.669814+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035694 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:07.669963+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbeaa000/0x0/0x4ffc00000, data 0x10c0897/0x1180000, compress 0x0/0x0/0x0, omap 0x11135, meta 0x2bbeecb), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:08.670150+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:09.670311+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:10.670531+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:11.670781+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038468 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:12.670960+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:13.671121+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:14.671275+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:15.671455+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:16.671631+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038468 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:17.671784+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:18.671950+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:19.672163+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:20.672317+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:21.672500+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038468 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:22.672683+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:23.672892+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:24.673203+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:25.673398+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:26.673725+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038468 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:27.673847+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:28.673995+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:29.674178+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:30.674412+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:31.674650+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038468 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:32.674818+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:33.674988+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c2316/0x1183000, compress 0x0/0x0/0x0, omap 0x11432, meta 0x2bbebce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:34.675175+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:35.675335+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 24371200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 127 handle_osd_map epochs [127,128], i have 128, src has [1,128]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.546075821s of 32.449630737s, submitted: 38
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:36.675505+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041242 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fbea4000/0x0/0x4ffc00000, data 0x10c3f7b/0x1186000, compress 0x0/0x0/0x0, omap 0x116bd, meta 0x2bbe943), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:37.675676+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:38.675844+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fbe9f000/0x0/0x4ffc00000, data 0x10c5ba0/0x1189000, compress 0x0/0x0/0x0, omap 0x11948, meta 0x2bbe6b8), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:39.676046+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:40.676201+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:41.676352+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044480 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:42.676517+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:43.676667+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:44.676805+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fbea2000/0x0/0x4ffc00000, data 0x10c5c3b/0x118a000, compress 0x0/0x0/0x0, omap 0x11948, meta 0x2bbe6b8), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:45.676968+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 24363008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 129 handle_osd_map epochs [129,130], i have 130, src has [1,130]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.761423111s of 10.037869453s, submitted: 73
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:46.677109+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 23314432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046790 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:47.677262+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 23314432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fbe9e000/0x0/0x4ffc00000, data 0x10c77e5/0x118c000, compress 0x0/0x0/0x0, omap 0x11bd3, meta 0x2bbe42d), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:48.677406+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 23273472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:49.677697+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 23273472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:50.677835+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 23248896 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fbe9a000/0x0/0x4ffc00000, data 0x10c952d/0x1190000, compress 0x0/0x0/0x0, omap 0x11f55, meta 0x2bbe0ab), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:51.677951+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 23248896 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054988 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:52.678115+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 23240704 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:53.678264+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 23224320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0x10ccdff/0x1196000, compress 0x0/0x0/0x0, omap 0x12557, meta 0x2bbdaa9), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:54.678390+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:55.678678+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0x10ccdff/0x1196000, compress 0x0/0x0/0x0, omap 0x12557, meta 0x2bbdaa9), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:56.678828+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058226 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:57.678960+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0x10ccdff/0x1196000, compress 0x0/0x0/0x0, omap 0x12557, meta 0x2bbdaa9), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.772963524s of 12.161247253s, submitted: 121
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:58.679113+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 23175168 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:59.679319+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 23175168 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:00.679429+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 23175168 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fbe91000/0x0/0x4ffc00000, data 0x10cce9a/0x1197000, compress 0x0/0x0/0x0, omap 0x12557, meta 0x2bbdaa9), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:01.679645+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061940 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:02.679801+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe90000/0x0/0x4ffc00000, data 0x10ce959/0x119a000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:03.679978+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:04.680158+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:05.680359+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:06.680716+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061940 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:07.680879+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:08.681031+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe90000/0x0/0x4ffc00000, data 0x10ce959/0x119a000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:09.681172+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:10.681359+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:11.681599+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe90000/0x0/0x4ffc00000, data 0x10ce959/0x119a000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061940 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:12.681798+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:13.681971+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.065046310s of 16.300659180s, submitted: 13
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:14.682119+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:15.682278+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe8f000/0x0/0x4ffc00000, data 0x10ce9f4/0x119b000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:16.682436+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063632 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:17.682640+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:18.682849+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:19.683036+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:20.683180+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:21.683307+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe8f000/0x0/0x4ffc00000, data 0x10ceb2a/0x119d000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe8f000/0x0/0x4ffc00000, data 0x10ceb2a/0x119d000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063726 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:22.683447+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe91000/0x0/0x4ffc00000, data 0x10ce9f4/0x119b000, compress 0x0/0x0/0x0, omap 0x12974, meta 0x2bbd68c), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:23.683604+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:24.683741+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:25.683911+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:26.684053+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.181322098s of 12.437225342s, submitted: 5
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067220 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:27.684221+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe8c000/0x0/0x4ffc00000, data 0x10d05f9/0x119e000, compress 0x0/0x0/0x0, omap 0x12bff, meta 0x2bbd401), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:28.684451+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:29.684646+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:30.684841+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:31.684987+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069404 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:32.685157+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe89000/0x0/0x4ffc00000, data 0x10d2052/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:33.685301+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:34.685473+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:35.685648+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe89000/0x0/0x4ffc00000, data 0x10d2052/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:36.685816+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071096 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:37.685998+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:38.686129+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:39.686257+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:40.686387+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.552777290s of 13.626544952s, submitted: 38
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe89000/0x0/0x4ffc00000, data 0x10d2052/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:41.686546+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070392 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:42.686745+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:43.686873+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d200d/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:44.687025+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:45.687209+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 23207936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d200d/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:46.687360+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 23191552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072084 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:47.687464+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe89000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 23191552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:48.687622+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:49.687810+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 23166976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:50.687987+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:51.688109+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073616 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:52.688253+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.846646309s of 12.063707352s, submitted: 6
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:53.688455+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe89000/0x0/0x4ffc00000, data 0x10d2143/0x11a3000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:54.688649+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:55.688820+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:56.689032+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071780 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:57.689240+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:58.689412+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:59.689576+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:00.689783+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 23158784 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:01.689975+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071780 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:02.690205+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.153181076s of 10.157307625s, submitted: 2
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:03.690382+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:04.690528+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:05.690717+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:06.690913+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 23150592 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072882 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8a000/0x0/0x4ffc00000, data 0x10d20a8/0x11a2000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:07.691041+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 23142400 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:08.691194+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 23142400 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:09.691420+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 23142400 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:10.691618+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 23134208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:11.691795+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 23134208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072148 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:12.691966+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 23134208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8b000/0x0/0x4ffc00000, data 0x10d200d/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:13.692158+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 23134208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8b000/0x0/0x4ffc00000, data 0x10d200d/0x11a1000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:14.692286+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x56437673bc00
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.712426186s of 11.808559418s, submitted: 5
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 22986752 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x56437673b000
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:15.692476+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 12
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 22790144 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:16.692614+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 22790144 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076490 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:17.692738+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 22790144 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:18.692924+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:19.693107+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe8b000/0x0/0x4ffc00000, data 0x10d1fdd/0x11a0000, compress 0x0/0x0/0x0, omap 0x12f02, meta 0x2bbd0fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:20.693281+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:21.693435+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078516 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:22.693586+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:23.693759+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe88000/0x0/0x4ffc00000, data 0x10d3b47/0x11a2000, compress 0x0/0x0/0x0, omap 0x1318d, meta 0x2bbce73), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:24.693950+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe88000/0x0/0x4ffc00000, data 0x10d3b47/0x11a2000, compress 0x0/0x0/0x0, omap 0x1318d, meta 0x2bbce73), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:25.694132+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:26.694277+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe88000/0x0/0x4ffc00000, data 0x10d3b47/0x11a2000, compress 0x0/0x0/0x0, omap 0x1318d, meta 0x2bbce73), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077336 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:27.694420+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:28.694581+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.803647995s of 14.063410759s, submitted: 34
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:29.694716+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:30.694879+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 137 handle_osd_map epochs [137,138], i have 138, src has [1,138]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:31.695031+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081212 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:32.695219+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe86000/0x0/0x4ffc00000, data 0x10d552b/0x11a4000, compress 0x0/0x0/0x0, omap 0x13487, meta 0x2bbcb79), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:33.695423+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:34.695613+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0x10d55c6/0x11a5000, compress 0x0/0x0/0x0, omap 0x13487, meta 0x2bbcb79), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:35.695793+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 22732800 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:36.696036+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 22724608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0x10d5661/0x11a6000, compress 0x0/0x0/0x0, omap 0x13487, meta 0x2bbcb79), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083014 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:37.696224+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 22724608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:38.696437+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 22724608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:39.696647+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 22724608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0x10d5661/0x11a6000, compress 0x0/0x0/0x0, omap 0x13487, meta 0x2bbcb79), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:40.696928+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.989875793s of 12.011968613s, submitted: 18
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 22863872 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:41.697204+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 22913024 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085534 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:42.697397+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe82000/0x0/0x4ffc00000, data 0x10d71fb/0x11a8000, compress 0x0/0x0/0x0, omap 0x13712, meta 0x2bbc8ee), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 22904832 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:43.697640+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 22904832 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:44.697852+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 22904832 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:45.698078+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 22904832 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:46.698231+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 22888448 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086026 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:47.698423+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 22888448 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:48.698596+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe80000/0x0/0x4ffc00000, data 0x10d8d75/0x11aa000, compress 0x0/0x0/0x0, omap 0x1399d, meta 0x2bbc663), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 22872064 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:49.698793+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 22872064 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:50.698971+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 22872064 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:51.699115+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 22872064 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.631030083s of 11.823550224s, submitted: 51
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088800 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:52.699293+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 22863872 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:53.699475+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fbe7d000/0x0/0x4ffc00000, data 0x10da830/0x11ad000, compress 0x0/0x0/0x0, omap 0x13d1c, meta 0x2bbc2e4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 22863872 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:54.699682+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 22839296 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:55.699913+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 22839296 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7d000/0x0/0x4ffc00000, data 0x10da8f9/0x11ae000, compress 0x0/0x0/0x0, omap 0x13d1c, meta 0x2bbc2e4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:56.700176+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 22839296 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092546 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:57.700379+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 22831104 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:58.700544+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc376/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 22831104 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:59.700778+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 22822912 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:00.700912+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:01.701083+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:02.701298+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094110 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.159491539s of 10.234383583s, submitted: 34
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe79000/0x0/0x4ffc00000, data 0x10dc43e/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:03.701685+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:04.701827+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:05.702016+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:06.702181+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:07.702346+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093504 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc2af/0x11b0000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:08.702480+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:09.702675+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:10.703072+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:11.703306+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:12.703508+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093504 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc34a/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:13.703621+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:14.703755+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 22798336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc34a/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.524249077s of 12.662443161s, submitted: 5
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:15.703982+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:16.704165+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:17.704427+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095068 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 22781952 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:18.704625+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe79000/0x0/0x4ffc00000, data 0x10dc412/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:19.704810+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 22773760 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:20.704951+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 22765568 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:21.705127+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc34a/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc34a/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:22.705271+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094334 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:23.705501+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc34a/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:24.705711+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:25.705973+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.469438553s of 10.483627319s, submitted: 6
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe79000/0x0/0x4ffc00000, data 0x10dc413/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:26.706146+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:27.706287+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095068 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:28.706478+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:29.706630+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 22749184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 ms_handle_reset con 0x56437673bc00 session 0x564375444e00
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 ms_handle_reset con 0x56437673b000 session 0x5643748f7a40
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:30.706772+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 22413312 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:31.706958+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc413/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 22413312 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 13
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:32.707124+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094908 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:33.707264+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc413/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:34.707399+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:35.707815+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:36.708087+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc413/0x11b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:37.708226+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094908 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:38.708315+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 22347776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:39.708713+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.680962563s of 13.837501526s, submitted: 183
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:40.708847+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:41.709007+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc378/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:42.709171+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094318 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:43.709255+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:44.709372+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc378/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:45.709526+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:46.709663+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:47.709841+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094318 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:48.710011+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc378/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:49.710170+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc378/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.737087250s of 10.740295410s, submitted: 2
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:50.710243+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:51.710364+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc376/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:52.710541+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093744 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:53.710694+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:54.710829+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:55.711005+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0x10dc2af/0x11b0000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:56.711144+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:57.711289+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093728 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:58.711437+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0x10dc2af/0x11b0000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:59.711625+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 22339584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:00.711813+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.274223328s of 10.329898834s, submitted: 4
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 22323200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:01.712089+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc377/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 22323200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:02.712205+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095436 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 22323200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:03.712343+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 22323200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc377/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:04.712526+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 22315008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:05.712793+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10dc377/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10dc375/0x11b1000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22249472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:06.712951+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22249472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:07.713095+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096896 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 22249472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:08.713220+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe48000/0x0/0x4ffc00000, data 0x110fa49/0x11e4000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 19922944 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:09.713368+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 19726336 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:10.713678+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.950484276s of 10.159562111s, submitted: 31
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 19972096 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:11.713833+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 19972096 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:12.713947+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102510 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe30000/0x0/0x4ffc00000, data 0x11269ac/0x11fc000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x2bbbfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 19750912 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:13.714085+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 19595264 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:14.714189+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 19595264 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:15.714365+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 19038208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:16.714531+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 19038208 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:17.714698+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 16670720 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:18.714850+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fabf8000/0x0/0x4ffc00000, data 0x11bfae8/0x1294000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 16359424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:19.714964+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 16351232 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:20.715112+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.725925446s of 10.040389061s, submitted: 39
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 16449536 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:21.715259+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 16498688 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fabd8000/0x0/0x4ffc00000, data 0x11df53a/0x12b4000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [0,0,0,2])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:22.715385+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115474 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 16490496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:23.715524+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 16490496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:24.715675+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 16244736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:25.715860+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 16392192 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:26.716023+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fab76000/0x0/0x4ffc00000, data 0x1240927/0x1316000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 16392192 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:27.716148+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114086 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 16187392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:28.716278+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 16187392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:29.716418+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 15089664 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:30.716651+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fab32000/0x0/0x4ffc00000, data 0x128579d/0x135a000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.163474083s of 10.421551704s, submitted: 53
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 14729216 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:31.716945+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fab32000/0x0/0x4ffc00000, data 0x128579d/0x135a000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 15384576 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:32.717171+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119124 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fab27000/0x0/0x4ffc00000, data 0x129078c/0x1365000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 15278080 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:33.717379+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 15425536 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:34.717722+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4faaf0000/0x0/0x4ffc00000, data 0x12c741b/0x139c000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4faaf0000/0x0/0x4ffc00000, data 0x12c741b/0x139c000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 15425536 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:35.717994+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 15417344 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:36.718166+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 15237120 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:37.718358+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119860 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 15187968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:38.718611+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 15187968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:39.718794+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 14974976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:40.719085+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4faab8000/0x0/0x4ffc00000, data 0x12ffaf2/0x13d4000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 13402112 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:41.719346+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.934916496s of 10.981097221s, submitted: 51
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 13402112 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:42.719596+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127704 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 13131776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:43.719798+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87072768 unmapped: 13516800 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:44.720012+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87072768 unmapped: 13516800 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:45.720213+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4faa47000/0x0/0x4ffc00000, data 0x137036b/0x1445000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 13664256 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:46.720383+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 13664256 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:47.720523+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135400 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87097344 unmapped: 13492224 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:48.720695+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa9f2000/0x0/0x4ffc00000, data 0x13c3c2f/0x1499000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 13647872 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:49.720872+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 13647872 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:50.721059+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 12394496 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:51.721219+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 12115968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:52.721399+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa9a2000/0x0/0x4ffc00000, data 0x14151a3/0x14ea000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139232 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 12115968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:53.721522+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 12115968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:54.721710+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.313160896s of 12.612329483s, submitted: 62
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 87998464 unmapped: 12591104 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:55.721918+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 12509184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:56.722079+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 12509184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:57.722228+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138590 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 12263424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:58.722360+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa96f000/0x0/0x4ffc00000, data 0x1448b04/0x151d000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 12263424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:59.722613+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa950000/0x0/0x4ffc00000, data 0x14675ca/0x153c000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 12263424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:00.722764+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 7449 writes, 29K keys, 7449 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7448 writes, 1611 syncs, 4.62 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1785 writes, 5185 keys, 1785 commit groups, 1.0 writes per commit group, ingest: 6.20 MB, 0.01 MB/s
                                           Interval WAL: 1784 writes, 696 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88489984 unmapped: 12099584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:01.722921+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 12058624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:02.723114+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144750 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88604672 unmapped: 11984896 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:03.723292+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x14a2849/0x1578000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 11837440 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:04.723417+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.493882179s of 10.143390656s, submitted: 48
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 12263424 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:05.723673+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 11419648 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:06.723835+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:07.723992+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 11337728 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144574 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:08.724132+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 11337728 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc ms_handle_reset ms_handle_reset con 0x564373ac8400
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: get_auth_request con 0x564376671800 auth_method 0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:09.724332+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89382912 unmapped: 11206656 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa8da000/0x0/0x4ffc00000, data 0x14dc410/0x15b2000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:10.724533+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:11.724724+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:12.731632+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147494 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa8d8000/0x0/0x4ffc00000, data 0x14dc51b/0x15b4000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:13.731829+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:14.731968+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.332037926s of 10.297814369s, submitted: 22
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:15.732193+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:16.732320+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa8d8000/0x0/0x4ffc00000, data 0x14dc547/0x15b4000, compress 0x0/0x0/0x0, omap 0x14021, meta 0x3d5bfdf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 142 handle_osd_map epochs [142,143], i have 143, src has [1,143]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:17.732465+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151418 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:18.732682+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:19.732842+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:20.733064+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa8d2000/0x0/0x4ffc00000, data 0x14de14e/0x15b7000, compress 0x0/0x0/0x0, omap 0x142ac, meta 0x3d5bd54), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:21.733211+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:22.733348+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150108 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:23.733509+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa8d6000/0x0/0x4ffc00000, data 0x14de0b1/0x15b6000, compress 0x0/0x0/0x0, omap 0x142ac, meta 0x3d5bd54), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:24.733708+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:25.733893+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.173578262s of 10.346817970s, submitted: 30
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:26.734022+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:27.734170+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151320 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:28.734291+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:29.734421+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d3000/0x0/0x4ffc00000, data 0x14df9ce/0x15b7000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:30.734639+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 11083776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:31.734793+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89513984 unmapped: 11075584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:32.734922+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89513984 unmapped: 11075584 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d2000/0x0/0x4ffc00000, data 0x14dfb0c/0x15b9000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154000 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:33.735056+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 11059200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:34.735194+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 11051008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d2000/0x0/0x4ffc00000, data 0x14dfb0c/0x15b9000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:35.735386+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 11051008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:36.735622+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.757122040s of 10.806098938s, submitted: 20
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 11026432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:37.735843+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d2000/0x0/0x4ffc00000, data 0x14dfac5/0x15b9000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154000 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:38.736248+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:39.736439+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:40.736639+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:41.736805+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d4000/0x0/0x4ffc00000, data 0x14dfa69/0x15b8000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:42.736993+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153250 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:43.737159+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:44.737317+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 11124736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d1000/0x0/0x4ffc00000, data 0x14dfba7/0x15ba000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x5643756ef400
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:45.737541+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89481216 unmapped: 11108352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:46.737783+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89481216 unmapped: 11108352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 14
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:47.737973+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 11091968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.172966957s of 11.213060379s, submitted: 14
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157480 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:48.738137+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 11091968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:49.738289+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:50.738485+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d1000/0x0/0x4ffc00000, data 0x14dfb60/0x15ba000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:51.738620+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:52.738837+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154480 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:53.739009+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:54.739165+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:55.739363+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d2000/0x0/0x4ffc00000, data 0x14df9ce/0x15b7000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:56.739545+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:57.739782+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.465863228s of 10.024806023s, submitted: 10
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155166 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:58.739920+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 11100160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d4000/0x0/0x4ffc00000, data 0x14dfa69/0x15b8000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:59.740050+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89522176 unmapped: 11067392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:00.740192+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89522176 unmapped: 11067392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:01.740359+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 11149312 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:02.743619+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 11149312 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155166 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:03.743788+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 11141120 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d4000/0x0/0x4ffc00000, data 0x14dfa69/0x15b8000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:04.743923+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 11059200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:05.744084+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 11059200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:06.744234+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89554944 unmapped: 11034624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:07.744366+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d5000/0x0/0x4ffc00000, data 0x14df9ce/0x15b7000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [0,0,0,0,0,1])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 11026432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.508973122s of 10.182012558s, submitted: 58
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153474 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:08.744523+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 11001856 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:09.744698+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 11001856 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:10.744843+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 11001856 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:11.745021+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90480640 unmapped: 10108928 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:12.745127+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90480640 unmapped: 10108928 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153474 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d5000/0x0/0x4ffc00000, data 0x14df9ce/0x15b7000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:13.745276+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90488832 unmapped: 10100736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:14.745432+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90488832 unmapped: 10100736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:15.745609+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90488832 unmapped: 10100736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:16.745763+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90488832 unmapped: 10100736 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:17.745930+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d5000/0x0/0x4ffc00000, data 0x14df9ce/0x15b7000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153474 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:18.746092+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:19.746265+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:20.746398+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:21.746526+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:22.746663+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.560441017s of 15.103803635s, submitted: 58
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152884 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d6000/0x0/0x4ffc00000, data 0x14df933/0x15b6000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:23.746800+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:24.746954+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 10092544 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:25.747116+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:26.747254+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:27.747400+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152884 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:28.747610+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d6000/0x0/0x4ffc00000, data 0x14df933/0x15b6000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:29.747780+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:30.747924+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:31.748083+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:32.748223+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152884 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d6000/0x0/0x4ffc00000, data 0x14df933/0x15b6000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:33.748723+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:34.748879+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 10084352 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.775199890s of 11.934508324s, submitted: 1
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:35.749052+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa8d6000/0x0/0x4ffc00000, data 0x14df933/0x15b6000, compress 0x0/0x0/0x0, omap 0x145b1, meta 0x3d5ba4f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 10076160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:36.749218+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 10076160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:37.749399+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 10076160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156378 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:38.749674+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa8d1000/0x0/0x4ffc00000, data 0x14e1538/0x15b9000, compress 0x0/0x0/0x0, omap 0x1483c, meta 0x3d5b7c4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 10076160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa8d1000/0x0/0x4ffc00000, data 0x14e1538/0x15b9000, compress 0x0/0x0/0x0, omap 0x1483c, meta 0x3d5b7c4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:39.749865+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 10076160 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:40.750063+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 10067968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:41.750198+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 10067968 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:42.750406+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa8d1000/0x0/0x4ffc00000, data 0x14e1538/0x15b9000, compress 0x0/0x0/0x0, omap 0x1483c, meta 0x3d5b7c4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159152 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:43.750638+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:44.750794+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8ce000/0x0/0x4ffc00000, data 0x14e2fb7/0x15bc000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x3d5b482), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:45.750977+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.054538727s of 11.160003662s, submitted: 37
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:46.751177+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:47.751359+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160844 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:48.751551+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e3052/0x15bd000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x3d5b482), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:49.751755+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:50.751906+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:51.752096+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:52.752256+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160844 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:53.752436+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e3052/0x15bd000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x3d5b482), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:54.752669+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 10059776 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:55.752919+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: handle_auth_request added challenge on 0x56437673b800
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 10043392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:56.753103+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 10043392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:57.753261+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.760693550s of 11.774554253s, submitted: 5
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 10043392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165734 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:58.753404+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 10043392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:59.753639+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e325d/0x15bf000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 10043392 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:00.753830+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 10035200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:01.753981+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 10035200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cc000/0x0/0x4ffc00000, data 0x14e32f8/0x15c0000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:02.754139+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 10035200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165144 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:03.754301+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e30ed/0x15be000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 10035200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e30ed/0x15be000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:04.754508+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 10035200 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:05.754747+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e3052/0x15bd000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:06.754868+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:07.755008+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1164426 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:08.755162+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:09.755298+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:10.755444+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14e3052/0x15bd000, compress 0x0/0x0/0x0, omap 0x14c1c, meta 0x3d5b3e4), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 10027008 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:11.755642+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.751629829s of 13.765338898s, submitted: 5
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:12.755810+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166946 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:13.755970+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:14.756103+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:15.756262+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa8ca000/0x0/0x4ffc00000, data 0x14e4c57/0x15c0000, compress 0x0/0x0/0x0, omap 0x14ea7, meta 0x3d5b159), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:16.756441+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:17.756610+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90570752 unmapped: 10018816 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 147 handle_osd_map epochs [147,148], i have 148, src has [1,148]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169720 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:18.756789+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa8ca000/0x0/0x4ffc00000, data 0x14e4c57/0x15c0000, compress 0x0/0x0/0x0, omap 0x14ea7, meta 0x3d5b159), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90578944 unmapped: 10010624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa8c7000/0x0/0x4ffc00000, data 0x14e66d6/0x15c3000, compress 0x0/0x0/0x0, omap 0x15170, meta 0x3d5ae90), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:19.756979+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90578944 unmapped: 10010624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:20.757157+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90578944 unmapped: 10010624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:21.758178+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90578944 unmapped: 10010624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:22.758405+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90587136 unmapped: 10002432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171412 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:23.759391+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa8c6000/0x0/0x4ffc00000, data 0x14e6771/0x15c4000, compress 0x0/0x0/0x0, omap 0x15170, meta 0x3d5ae90), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90587136 unmapped: 10002432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:24.759543+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90587136 unmapped: 10002432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:25.760396+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90587136 unmapped: 10002432 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 148 handle_osd_map epochs [148,149], i have 149, src has [1,149]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.325739861s of 14.450867653s, submitted: 36
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:26.760663+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:27.760957+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:28.761174+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175144 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8c4000/0x0/0x4ffc00000, data 0x14e82db/0x15c6000, compress 0x0/0x0/0x0, omap 0x153fb, meta 0x3d5ac05), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:29.761412+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:30.761665+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8c3000/0x0/0x4ffc00000, data 0x14e8376/0x15c7000, compress 0x0/0x0/0x0, omap 0x153fb, meta 0x3d5ac05), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:31.761905+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90603520 unmapped: 9986048 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:32.762185+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 149 handle_osd_map epochs [149,150], i have 150, src has [1,150]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 9969664 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:33.762365+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177774 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 9969664 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0x14e9df5/0x15ca000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:34.762487+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90628096 unmapped: 9961472 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:35.762715+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90636288 unmapped: 9953280 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:36.762973+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90636288 unmapped: 9953280 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:37.763202+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.823903084s of 11.909013748s, submitted: 42
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:38.763366+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179848 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c1000/0x0/0x4ffc00000, data 0x14e9e90/0x15cb000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:39.763601+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:40.763758+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c1000/0x0/0x4ffc00000, data 0x14e9e90/0x15cb000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:41.763964+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:42.764144+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c2000/0x0/0x4ffc00000, data 0x14e9df5/0x15ca000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:43.764295+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178012 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:44.764465+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:45.764660+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:46.764778+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 9945088 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c1000/0x0/0x4ffc00000, data 0x14e9e90/0x15cb000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:47.764950+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c1000/0x0/0x4ffc00000, data 0x14e9e90/0x15cb000, compress 0x0/0x0/0x0, omap 0x15702, meta 0x3d5a8fe), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 8896512 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:48.765139+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181260 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8c1000/0x0/0x4ffc00000, data 0x14e9e90/0x15cb000, compress 0x0/0x0/0x0, omap 0x15752, meta 0x3d5a8ae), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 8896512 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:49.765286+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 8896512 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:50.765437+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.961210251s of 12.982180595s, submitted: 4
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:51.765603+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:52.765747+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8bc000/0x0/0x4ffc00000, data 0x14eba95/0x15ce000, compress 0x0/0x0/0x0, omap 0x159dd, meta 0x3d5a623), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:53.765873+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186302 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:54.766082+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8bc000/0x0/0x4ffc00000, data 0x14ebbcb/0x15d0000, compress 0x0/0x0/0x0, omap 0x159dd, meta 0x3d5a623), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:55.766279+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:56.766465+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8bd000/0x0/0x4ffc00000, data 0x14ebb30/0x15cf000, compress 0x0/0x0/0x0, omap 0x159dd, meta 0x3d5a623), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 8888320 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:57.766658+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91717632 unmapped: 8871936 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:58.766904+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191582 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91725824 unmapped: 8863744 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:59.767108+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8b4000/0x0/0x4ffc00000, data 0x14ef24f/0x15d6000, compress 0x0/0x0/0x0, omap 0x15f31, meta 0x3d5a0cf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 8855552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:00.769606+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 8855552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:01.769726+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 8855552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:02.769855+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 15
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 8855552 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:03.770187+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193622 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.605374336s of 12.818570137s, submitted: 65
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8b5000/0x0/0x4ffc00000, data 0x14ef1b4/0x15d5000, compress 0x0/0x0/0x0, omap 0x15f31, meta 0x3d5a0cf), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91750400 unmapped: 8839168 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:04.770471+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91750400 unmapped: 8839168 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:05.770657+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91758592 unmapped: 8830976 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:06.770782+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 153 handle_osd_map epochs [153,154], i have 154, src has [1,154]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 8806400 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:07.770946+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 154 handle_osd_map epochs [155,156], i have 154, src has [1,156]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 7725056 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:08.771139+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8b4000/0x0/0x4ffc00000, data 0x14f0cb3/0x15d6000, compress 0x0/0x0/0x0, omap 0x161a4, meta 0x3d59e5c), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201660 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 7716864 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:09.771313+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 7716864 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:10.771457+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 7684096 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:11.771715+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8af000/0x0/0x4ffc00000, data 0x14f4338/0x15db000, compress 0x0/0x0/0x0, omap 0x164ab, meta 0x3d59b55), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 7684096 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:12.771867+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:13.772063+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203412 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8ac000/0x0/0x4ffc00000, data 0x14f5e17/0x15de000, compress 0x0/0x0/0x0, omap 0x16932, meta 0x3d596ce), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:14.772194+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:15.772806+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.035589218s of 12.326159477s, submitted: 104
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:16.772959+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:17.773098+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa8a9000/0x0/0x4ffc00000, data 0x14f7a4c/0x15e1000, compress 0x0/0x0/0x0, omap 0x16bbd, meta 0x3d59443), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:18.773299+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206186 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:19.773502+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:20.773691+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa8a9000/0x0/0x4ffc00000, data 0x14f7a4c/0x15e1000, compress 0x0/0x0/0x0, omap 0x16bbd, meta 0x3d59443), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:21.773885+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:22.774078+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:23.774277+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208960 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 7667712 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:24.774499+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 7659520 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:25.774717+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.055333138s of 10.117359161s, submitted: 72
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 7643136 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:26.774870+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f94eb/0x15e4000, compress 0x0/0x0/0x0, omap 0x16f44, meta 0x3d590bc), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 7643136 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:27.775053+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa8a3000/0x0/0x4ffc00000, data 0x14fb0f0/0x15e7000, compress 0x0/0x0/0x0, omap 0x171cf, meta 0x3d58e31), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 7643136 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:28.775231+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215118 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 7643136 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:29.775384+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 160 ms_handle_reset con 0x56437673b800 session 0x5643762d8a80
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 7643136 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:30.775607+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 160 ms_handle_reset con 0x5643756ef400 session 0x5643769e5180
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa8a1000/0x0/0x4ffc00000, data 0x14fb226/0x15e9000, compress 0x0/0x0/0x0, omap 0x171cf, meta 0x3d58e31), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93306880 unmapped: 7282688 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:31.775810+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93339648 unmapped: 7249920 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 16
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:32.775952+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 7430144 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:33.776152+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217620 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x14fe735/0x15ec000, compress 0x0/0x0/0x0, omap 0x1774b, meta 0x3d588b5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 7430144 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:34.776316+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x14fe735/0x15ec000, compress 0x0/0x0/0x0, omap 0x1774b, meta 0x3d588b5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:35.776526+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:36.776708+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:37.776845+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:38.777067+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217620 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:39.777238+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 7397376 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:40.777455+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x14fe735/0x15ec000, compress 0x0/0x0/0x0, omap 0x1774b, meta 0x3d588b5), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.240417480s of 14.388905525s, submitted: 267
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:41.777630+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:42.777786+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:43.777976+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:44.778118+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:45.778271+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:46.778422+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:47.778619+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:48.779006+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:49.779202+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:50.779400+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:51.779531+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:52.779653+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:53.779816+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:54.780137+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:55.780371+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:56.780556+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:57.780812+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:58.780991+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:59.781233+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:00.781432+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:01.781609+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:02.781791+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:03.781975+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:04.782137+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:05.782347+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:06.782484+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:07.782705+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:08.782867+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:09.783068+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:10.783285+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:11.783479+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:12.783779+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:13.783933+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:14.784078+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:15.784243+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:16.784384+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:17.784713+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:18.784874+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219930 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x15001d4/0x15ef000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:19.785058+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93200384 unmapped: 7389184 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:20.785244+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.604015350s of 40.622814178s, submitted: 12
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 163 handle_osd_map epochs [163,164], i have 164, src has [1,164]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:21.785392+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x1501dd9/0x15f2000, compress 0x0/0x0/0x0, omap 0x17c61, meta 0x3d5839f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:22.785554+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:23.785794+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222704 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:24.786020+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x1501dd9/0x15f2000, compress 0x0/0x0/0x0, omap 0x17c61, meta 0x3d5839f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:25.786295+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:26.786429+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:27.786642+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:28.786858+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222704 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:29.786992+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:30.787123+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x1501dd9/0x15f2000, compress 0x0/0x0/0x0, omap 0x17c61, meta 0x3d5839f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:31.787256+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:32.787423+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x1501dd9/0x15f2000, compress 0x0/0x0/0x0, omap 0x17c61, meta 0x3d5839f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:33.787689+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222704 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:34.787885+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:35.788173+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:36.788369+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x1501dd9/0x15f2000, compress 0x0/0x0/0x0, omap 0x17c61, meta 0x3d5839f), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:37.788601+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 164 handle_osd_map epochs [164,165], i have 165, src has [1,165]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.066982269s of 17.141559601s, submitted: 35
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:38.788729+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:39.788911+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:40.789161+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:41.789313+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:42.789455+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _renew_subs
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:43.789628+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:44.789800+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:45.789964+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:46.790153+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:47.790310+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:48.790472+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:49.790686+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:50.790834+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:51.791719+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:52.791884+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:53.792116+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:54.792271+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:55.792461+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:56.792603+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:57.792764+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:58.793331+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:59.793511+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:00.793629+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:01.793757+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:02.793953+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:03.794116+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:04.794251+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:05.794414+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:06.794671+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:07.794925+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:08.795055+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:09.795223+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:10.795556+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:11.795714+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:12.795844+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:13.795976+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:14.796168+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:15.796394+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:16.796544+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:17.796760+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:18.796931+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:19.797105+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:20.797265+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:21.797528+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:22.797682+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:23.797828+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:24.797962+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:25.798165+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:26.798344+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 7364608 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:27.798535+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 7356416 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:28.798723+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 7356416 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225478 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:29.798902+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 7356416 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:30.799036+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 52.169063568s of 52.178222656s, submitted: 12
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 ms_handle_reset con 0x564373ac8000 session 0x564376270380
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 7069696 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:31.799180+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 7069696 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:32.799334+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Got map version 17
Jan 23 19:20:33 compute-0 ceph-osd[88072]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 7069696 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:33.799487+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 7069696 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:34.799645+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 7069696 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:35.799794+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 6938624 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'config diff' '{prefix=config diff}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:36.799916+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'config show' '{prefix=config show}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93757440 unmapped: 6832128 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:37.800065+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 93978624 unmapped: 6610944 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:38.800238+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 6324224 heap: 100589568 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'log dump' '{prefix=log dump}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:39.800357+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'perf dump' '{prefix=perf dump}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 17367040 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'perf schema' '{prefix=perf schema}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:40.800492+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:41.800677+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:42.800808+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:43.800928+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:44.801066+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:45.801213+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:46.801375+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:47.801529+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:48.801646+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:49.801822+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:50.801979+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:51.802112+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:52.802407+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:53.802617+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:54.802831+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:55.803091+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:56.803295+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:57.803476+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:58.803679+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:59.803830+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:00.803956+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:01.804142+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:02.804331+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:03.804535+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:04.804726+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:05.804984+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:06.805161+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:07.805385+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:08.805633+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 17219584 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:09.814631+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:10.814965+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:11.815306+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:12.815460+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:13.815614+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:14.815795+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:15.816325+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:16.816953+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:17.817659+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:18.818048+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:19.818389+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:20.818802+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:21.819032+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:22.819436+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:23.819657+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94437376 unmapped: 17195008 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:24.819964+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94437376 unmapped: 17195008 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:25.820142+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94437376 unmapped: 17195008 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:26.820274+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:27.820407+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:28.820539+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:29.820644+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:30.820773+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:31.820932+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:32.821064+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:33.821199+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:34.821317+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:35.821476+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:36.821621+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:37.821743+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:38.821869+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:39.821987+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:40.822105+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:41.822251+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:42.822382+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:43.822515+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:44.822633+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:45.822887+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:46.823189+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:47.823332+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:48.823495+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:49.823649+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:50.823775+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:51.823848+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:52.824002+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:53.824180+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:54.824318+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:55.824536+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:56.824699+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:57.824894+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:58.825026+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:59.825170+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:00.825312+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:01.825513+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:02.825639+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:03.825807+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:04.825948+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:05.826158+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:06.826589+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:07.826783+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:08.826985+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 17129472 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:09.827114+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 17129472 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:10.827288+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 17129472 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:11.827451+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 17129472 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:12.827622+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 17129472 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:13.827765+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 17129472 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:14.827939+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:15.828135+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:16.828380+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:17.828550+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:18.828783+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:19.828981+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:20.829152+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 17342464 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:21.829347+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 17342464 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:22.829668+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 17342464 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:23.829851+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 17326080 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:24.830005+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 17326080 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:25.830184+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 17326080 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:26.830351+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 17326080 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:27.830599+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:28.830796+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:29.830954+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:30.831148+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:31.831347+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:32.831540+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:33.832002+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:34.832156+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:35.832390+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:36.832681+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:37.832922+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:38.833125+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:39.833348+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:40.833492+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:41.833700+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:42.833912+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 17317888 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:43.834055+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:44.834193+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:45.834337+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:46.834443+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:47.834620+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:48.834786+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:49.834911+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:50.835075+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:51.835246+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:52.835419+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:53.835732+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94339072 unmapped: 17293312 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:54.835811+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 17285120 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:55.835969+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 17285120 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:56.836110+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 17285120 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:57.836243+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 17285120 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:58.836395+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 17285120 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:59.842730+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 17285120 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:00.842859+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 17285120 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:01.842974+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 17285120 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:02.843076+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 17285120 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:03.843215+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94363648 unmapped: 17268736 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:04.843370+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:05.843543+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:06.843733+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:07.843862+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:08.844001+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:09.844152+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:10.844307+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:11.844475+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:12.844618+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:13.844768+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:14.844970+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:15.845158+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:16.845302+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:17.845422+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:18.845523+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:19.845707+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:20.845836+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 17260544 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:21.846009+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 17252352 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:22.846160+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 17252352 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:23.846324+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:24.846465+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:25.846666+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:26.846841+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:27.847013+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:28.847143+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:29.847324+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:30.847492+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:31.847638+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:32.847807+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:33.847928+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 17235968 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:34.848058+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 17227776 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:35.848221+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 17227776 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:36.848387+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 17227776 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:37.848488+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 17227776 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:38.848662+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 17227776 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:39.848950+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 17227776 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:40.849112+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 17227776 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:41.849322+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 17227776 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:42.849485+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 17227776 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:43.849650+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:44.849814+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:45.850034+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:46.850228+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:47.850382+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 17211392 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:48.850524+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:49.850619+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:50.850759+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:51.850906+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:52.851092+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:53.851278+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:54.851428+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:55.851651+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:56.851888+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:57.852053+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:58.852227+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:59.852377+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:00.852621+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:01.852807+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:02.852971+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 17203200 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:03.853137+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:04.853361+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:05.853693+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:06.853831+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:07.853980+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:08.854158+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:09.854304+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:10.854447+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:11.854639+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:12.854835+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:13.854984+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:14.855130+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:15.855353+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 17186816 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:16.855471+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:17.855614+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:18.858515+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:19.858741+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:20.859630+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:21.860822+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:22.861246+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:23.861398+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 17178624 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:24.861616+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:25.861792+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:26.861933+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:27.862109+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:28.862454+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:29.862633+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:30.862780+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:31.863092+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:32.863238+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:33.863635+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:34.863845+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:35.864132+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:36.864419+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:37.864637+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:38.864865+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:39.865098+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:40.865301+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:41.865523+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 17162240 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:42.865744+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:43.865941+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 17154048 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:44.866098+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:45.866396+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:46.866546+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:47.866743+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:48.866946+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:49.867108+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:50.867261+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:51.867458+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:52.867635+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:53.867828+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:54.867989+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:55.868181+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:56.868315+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:57.868515+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:58.868665+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:59.868813+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:00.868997+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 9033 writes, 33K keys, 9033 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9033 writes, 2107 syncs, 4.29 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1584 writes, 3841 keys, 1584 commit groups, 1.0 writes per commit group, ingest: 2.18 MB, 0.00 MB/s
                                           Interval WAL: 1585 writes, 496 syncs, 3.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:01.869123+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:02.869338+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:03.869487+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 17137664 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:04.869667+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:05.869845+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:06.870008+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:07.870161+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:08.870299+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:09.870444+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 17121280 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:10.870639+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 17113088 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:11.870757+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 17113088 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:12.870900+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:13.871040+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:14.871178+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:15.871424+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:16.871608+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:17.871767+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:18.871915+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:19.872028+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:20.872142+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:21.872319+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:22.872438+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:23.872641+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 17104896 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:24.872750+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:25.872935+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:26.873062+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:27.873234+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:28.873415+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:29.873576+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:30.873750+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:31.873897+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:32.874032+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:33.874207+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:34.874352+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:35.875081+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:36.875230+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:37.875358+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:38.875502+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:39.875618+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:40.875745+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:41.875907+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:42.876073+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:43.876242+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 17088512 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:44.876444+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 17072128 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:45.876728+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 17072128 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:46.876904+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 17072128 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:47.877131+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 17072128 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:48.877330+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 17072128 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:49.877677+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 17072128 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:50.877869+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 17072128 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:51.878170+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 17072128 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:52.878445+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 17072128 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:53.878670+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 17072128 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:54.878852+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94568448 unmapped: 17063936 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:55.879140+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94568448 unmapped: 17063936 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:56.879321+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94568448 unmapped: 17063936 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:57.879530+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94576640 unmapped: 17055744 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:58.879840+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94576640 unmapped: 17055744 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:59.880087+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94576640 unmapped: 17055744 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:00.880313+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94576640 unmapped: 17055744 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:01.880541+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94576640 unmapped: 17055744 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:02.880774+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94576640 unmapped: 17055744 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:03.880956+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 333.320861816s of 333.336303711s, submitted: 157
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94617600 unmapped: 17014784 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:04.881188+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94633984 unmapped: 16998400 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:05.881455+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94625792 unmapped: 17006592 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:06.881674+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94642176 unmapped: 16990208 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:07.881921+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94642176 unmapped: 16990208 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:08.882109+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94642176 unmapped: 16990208 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:09.882359+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 16982016 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:10.882542+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 16982016 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:11.882741+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 16982016 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:12.882907+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 16982016 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:13.883090+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 16982016 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:14.883239+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 16982016 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:15.883409+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 16973824 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:16.883598+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 16973824 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:17.883739+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 16973824 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:18.883929+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 16973824 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:19.884109+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 16973824 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:20.884273+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 16973824 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:21.884421+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 16973824 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:22.884630+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 16973824 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:23.884885+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 16973824 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:24.885065+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:25.885241+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:26.885378+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:27.885536+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:28.885713+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:29.885857+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:30.885993+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:31.886160+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:32.886339+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:33.886471+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:34.886610+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:35.886799+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:36.887005+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:37.887177+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:38.887311+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:39.887462+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:40.887612+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:41.887786+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:42.887929+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:43.888111+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:44.888267+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:45.888451+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:46.888616+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:47.888790+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:48.888968+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:49.889096+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:50.889242+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:51.889395+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:52.889543+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:53.889696+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:54.889820+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:55.890336+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:56.890510+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:57.890740+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:58.890927+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:59.891067+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:00.891344+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:01.891750+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:02.892670+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:03.893475+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:04.893644+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:05.893923+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:06.894177+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:07.894766+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:08.895257+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:09.895809+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:10.895985+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:11.896205+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:12.896498+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:13.896823+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:14.897108+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:15.897418+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:16.897715+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:17.897909+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:18.898058+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:19.898266+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:20.898480+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:21.906998+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:22.907229+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:23.907412+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:24.907548+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:25.907793+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:26.907942+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:27.908109+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:28.908291+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:29.908527+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:30.908744+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:31.909055+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:32.909231+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:33.909375+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:34.909614+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:35.909818+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:36.910002+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:37.910137+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:38.910354+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:39.910537+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:40.910782+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:41.911083+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:42.911234+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:43.911413+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:44.911626+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:45.911808+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:46.912041+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:47.912202+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:48.912383+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:49.912613+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:50.912788+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:51.913047+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:52.913231+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:53.913390+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:54.913517+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:55.913701+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:56.913857+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:57.914009+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:58.914262+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:59.914421+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:00.914690+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:01.915013+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:02.915154+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:03.915296+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:04.915467+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:05.915687+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:06.915821+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:07.915957+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:08.916156+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:09.916291+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:10.916418+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:11.916547+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:12.916712+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:13.916835+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:14.916954+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:15.917122+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:16.917258+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:17.917367+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:18.917495+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:19.917630+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:20.917820+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:21.918633+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:22.918772+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:23.918953+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:24.919111+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:25.919261+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:26.919430+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:27.919634+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:28.919756+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:29.919889+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:30.920046+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:31.920203+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:32.920397+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 16965632 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:33.920532+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:34.920670+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:35.920862+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:36.921066+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:37.921174+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:38.921363+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:39.921639+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:40.921785+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:41.922091+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:42.922268+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:43.922518+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:44.922713+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:45.922956+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:46.923146+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:47.923325+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:48.923490+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:49.923637+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:50.923984+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:51.924111+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:52.924262+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:53.924379+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:54.924488+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:55.924738+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:56.924911+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:57.925029+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:58.925153+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:59.925285+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:00.925409+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 16957440 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:01.925686+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:02.925871+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:03.925998+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:04.926179+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:05.926331+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:06.926532+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:07.926620+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:08.926762+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:09.926911+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:10.927039+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:11.927165+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:12.927314+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:13.927480+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:14.927616+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:15.927779+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:16.927921+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:17.928084+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:18.928240+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:19.928402+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:20.928525+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:21.928659+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:22.928802+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:23.928983+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:24.929126+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:25.929394+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:26.929536+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:27.929713+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:28.929872+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:29.930025+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:30.930220+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:31.930402+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:32.930820+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:33.930996+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:34.931221+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:35.931469+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 16949248 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:36.931607+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 16941056 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:37.931820+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 16941056 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:38.931963+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 16941056 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:39.932103+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 16941056 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:40.932240+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 16941056 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:41.932383+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:42.936630+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:43.936814+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:44.936972+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:45.937168+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:46.937366+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:47.937660+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:48.937875+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:49.938078+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:50.938190+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:51.938385+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:52.938524+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:53.938698+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:54.938833+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:55.939033+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:56.939201+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94699520 unmapped: 16932864 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:57.939334+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 16924672 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:58.939473+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 16924672 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:59.939610+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 16924672 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:00.939738+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 16924672 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'config diff' '{prefix=config diff}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'config show' '{prefix=config show}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:33 compute-0 ceph-osd[88072]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:33 compute-0 ceph-osd[88072]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224758 data_alloc: 218103808 data_used: 5786
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets getting new tickets!
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:01.939980+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _finish_auth 0
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:01.940658+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94593024 unmapped: 17039360 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: tick
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_tickets
Jan 23 19:20:33 compute-0 ceph-osd[88072]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:02.940134+0000)
Jan 23 19:20:33 compute-0 ceph-osd[88072]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x1503858/0x15f5000, compress 0x0/0x0/0x0, omap 0x17eec, meta 0x3d58114), peers [0,1] op hist [])
Jan 23 19:20:33 compute-0 ceph-osd[88072]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 16891904 heap: 111632384 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:33 compute-0 ceph-osd[88072]: do_command 'log dump' '{prefix=log dump}'
Jan 23 19:20:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 23 19:20:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1144084261' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 23 19:20:34 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4204281759' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 23 19:20:34 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1799015010' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 23 19:20:34 compute-0 ceph-mon[75097]: from='client.14866 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:34 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4001095600' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 23 19:20:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14874 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 23 19:20:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656620958' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 23 19:20:34 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14878 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:34 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} v 0)
Jan 23 19:20:34 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:20:35 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:35 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14882 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 23 19:20:35 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125988379' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 23 19:20:35 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} v 0)
Jan 23 19:20:35 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:20:35 compute-0 ceph-mon[75097]: pgmap v1537: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:35 compute-0 ceph-mon[75097]: from='client.14872 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:35 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1144084261' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 23 19:20:35 compute-0 ceph-mon[75097]: from='client.14874 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:35 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2656620958' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 23 19:20:35 compute-0 ceph-mon[75097]: from='client.14878 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:35 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:20:36 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14884 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 23 19:20:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1520569982' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 23 19:20:36 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14888 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:36 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 23 19:20:36 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3528365489' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 23 19:20:37 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:37 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14892 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:37 compute-0 crontab[268894]: (root) LIST (root)
Jan 23 19:20:37 compute-0 ceph-mon[75097]: pgmap v1538: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:37 compute-0 ceph-mon[75097]: from='client.14882 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:37 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3125988379' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 23 19:20:37 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:20:37 compute-0 ceph-mon[75097]: from='client.14884 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:37 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1520569982' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 23 19:20:37 compute-0 ceph-mon[75097]: from='client.14888 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:37 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3528365489' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 23 19:20:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 19:20:37 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433855863' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 23 19:20:37 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14896 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:20:37 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 23 19:20:37 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847420488' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 23 19:20:37 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14900 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:38 compute-0 ceph-mon[75097]: pgmap v1539: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:38 compute-0 ceph-mon[75097]: from='client.14892 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:38 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2433855863' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 23 19:20:38 compute-0 ceph-mon[75097]: from='client.14896 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:38 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/847420488' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 23 19:20:38 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14904 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:17.982262+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:18.982402+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:19.982637+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:20.982891+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:21.983044+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:22.987870+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:23.988001+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1155072 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:24.988232+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:25.988386+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:26.988548+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:27.988737+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:28.988882+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:29.989055+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:30.989288+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:31.989627+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:32.989924+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:33.990113+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:34.990304+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:35.990798+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:36.991080+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:37.991377+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:38.991524+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:39.991775+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:40.992173+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:41.992320+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:42.992487+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:43.992664+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:44.992777+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:45.992914+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:46.993044+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:47.993164+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:48.993291+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:49.993440+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:50.993620+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:51.993756+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:52.993929+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:53.994344+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:54.994486+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:55.994624+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:56.994777+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:57.994914+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:58.995078+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:59.995265+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 1146880 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:00.995423+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:01.995619+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:02.995760+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:03.995924+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:04.996047+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:05.996176+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:06.996295+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:07.996437+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:08.996621+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:09.996753+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:10.996916+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:11.997080+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:12.997291+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:13.997527+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:14.997762+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:15.997953+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:16.998353+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:17.998598+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:18.998814+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:19.998989+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:21.000250+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:22.000393+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:23.000523+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:24.000731+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:25.000891+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 1122304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:26.001103+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:27.001320+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:28.001520+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:29.001653+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:30.001790+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:31.002058+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:32.002402+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:33.002633+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:34.002806+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:35.003003+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:36.003203+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:37.003354+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:38.003534+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:39.003772+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:40.003954+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:41.004114+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:42.004257+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:43.004437+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 1114112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:44.004625+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:45.004782+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:46.004916+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:47.005077+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:48.005237+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:49.005342+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:50.005472+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:51.005726+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:52.005886+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:53.006046+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:54.006634+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:55.006857+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:56.006996+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:57.007144+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:58.007297+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:59.007481+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:00.007676+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:01.007810+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:02.007955+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:03.008119+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:04.008286+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:05.008514+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 1105920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:06.008638+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:07.008860+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:08.009077+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:09.009226+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:10.009380+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:11.009547+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:12.009714+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:13.009830+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:14.009954+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:15.010221+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:16.010381+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:17.010533+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:18.010697+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:19.010851+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:20.010975+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:21.011256+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:22.011388+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:23.011539+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1097728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:24.011771+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:25.011901+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:26.012032+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:27.012190+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:28.012372+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:29.012495+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:30.012623+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:31.012786+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:32.012906+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:33.013030+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:34.013192+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 1089536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:35.013320+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:36.013438+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:37.013643+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:38.013786+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:39.014134+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:40.014273+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 1081344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:41.014442+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:42.014666+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:43.014828+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:44.014972+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:45.015155+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:46.015552+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:47.015783+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:48.015967+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:49.016264+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:50.016425+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:51.016642+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:52.016788+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:53.016920+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:54.017132+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:55.017298+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:56.017462+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:57.017689+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:58.017889+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:59.018133+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:00.018359+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 1073152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:01.018506+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 1064960 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:02.018676+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 1064960 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:03.018884+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 1064960 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc ms_handle_reset ms_handle_reset con 0x55944a35e000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: get_auth_request con 0x55944a35f400 auth_method 0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:04.019080+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 ms_handle_reset con 0x55944adaa000 session 0x55944ae53180
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944adaa800
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 ms_handle_reset con 0x55944adab800 session 0x55944ae52c40
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944a48a000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:05.019254+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:06.019413+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:07.019584+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:08.019785+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:09.019913+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:10.020054+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:11.020241+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:12.020375+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:13.020510+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:14.020678+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:15.020877+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:16.020985+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:17.021105+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:18.021248+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:19.021451+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:20.021654+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:21.021857+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:22.022042+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:23.022176+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:24.022383+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:25.022519+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:26.022708+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:27.022972+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:28.023099+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:29.023233+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:30.023368+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:31.023537+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:32.023641+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:33.023780+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:34.023901+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:35.024036+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:36.024153+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:37.024304+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:38.024432+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:39.024627+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:40.024787+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:41.024932+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:42.025070+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:43.025256+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:44.025546+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:45.025700+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:46.026057+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:47.026193+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:48.026342+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:49.026484+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:50.026602+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:51.026767+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:52.026896+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:53.027043+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:54.027164+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:55.027325+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:56.027494+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:57.027627+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:58.027803+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:59.027968+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:00.028457+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:01.028659+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 860160 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:02.028999+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 860160 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:03.029152+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 860160 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.103851318s of 300.344635010s, submitted: 90
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:04.029375+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:05.029527+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:06.029896+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:07.030280+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:08.030465+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:09.030603+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:10.030743+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:11.030883+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:12.031026+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:13.031150+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:14.031279+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:15.031401+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:16.031536+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:17.031692+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:18.031908+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:19.032049+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:20.032204+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:21.032346+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:22.032595+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:23.032759+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:24.032894+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:25.033034+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:26.033194+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:27.033343+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:28.033497+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:29.033641+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:30.033746+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:31.033885+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:32.034022+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:33.034169+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:34.034307+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:35.034436+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:36.034552+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:37.034702+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:38.034856+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:39.034993+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:40.035120+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:41.035269+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:42.035736+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:43.035920+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:44.036088+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:45.036209+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:46.036345+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:47.036487+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:48.036696+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:49.036816+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:50.036924+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:51.037091+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:52.037258+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:53.037397+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:54.037590+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:55.037698+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:56.037832+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:57.037968+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:58.038093+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:59.038227+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:00.038375+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:01.038595+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:02.038822+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:03.038961+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:04.039201+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:05.039349+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:06.039509+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:07.039672+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:08.039826+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:09.039962+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:10.040080+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:11.040248+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:12.040442+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:13.040643+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:14.040768+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:15.040913+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:16.041091+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:17.041255+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:18.041395+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:19.041546+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:20.041733+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:21.041939+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:22.042066+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:23.042226+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:24.042356+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:25.042476+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:26.042625+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:27.042810+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:28.043096+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:29.043309+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:30.043470+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:31.043710+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:32.043839+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:33.043969+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:34.044166+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:35.044314+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:36.044445+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:37.044610+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:38.044786+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:39.045029+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:40.045144+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:41.045290+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:42.045420+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:43.045579+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:44.045709+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:45.045896+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:46.046102+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:47.046340+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:48.046503+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:49.046672+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:50.046804+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:51.047044+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:52.047177+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:53.047302+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:54.047432+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:55.047604+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:56.047707+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:57.047826+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:58.047952+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:59.048053+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:00.048936+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:01.049241+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:02.049642+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:03.049974+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:04.050195+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:05.050321+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:06.050495+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:07.050653+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:08.050798+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:09.050936+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:10.051096+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:11.051368+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:12.051511+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:13.051689+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:14.051927+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:15.052063+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:16.052208+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:17.052409+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:18.052613+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:19.052831+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:20.052997+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:21.053156+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:22.053381+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:23.053588+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:24.053776+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:25.053952+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:26.054141+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:27.054360+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:28.054589+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:29.054796+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:30.055021+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:31.055213+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:32.055391+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:33.055588+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:34.055735+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:35.055840+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:36.056102+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:37.056228+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:38.056428+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:39.056685+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:40.056884+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:41.057099+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:42.057403+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:43.057631+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:44.057831+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:45.058025+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:46.058306+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:47.058518+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:48.058646+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:49.058786+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:50.058996+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:51.059235+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:52.059501+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:53.059730+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:54.059862+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:55.060020+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:56.060234+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:57.060374+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:58.060495+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:59.060631+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:00.060754+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:01.060932+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:02.061170+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:03.061394+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:04.061639+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:05.061798+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:06.061958+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:07.062118+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:08.062265+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:09.062392+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:10.062459+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:11.062621+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:12.062903+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:13.063102+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread fragmentation_score=0.000127 took=0.000037s
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:14.063322+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:15.063469+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:16.063750+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:17.063939+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:18.064062+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:19.064323+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:20.064502+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:21.064702+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:22.064877+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:23.065047+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:24.065196+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:25.065335+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:26.065540+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:27.065751+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:28.065935+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:29.066620+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:30.067312+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:31.067605+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:32.068304+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:33.068448+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:34.068999+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:35.069429+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:36.069888+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:37.070161+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:38.070407+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:39.070629+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:40.070950+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:41.071299+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:42.071655+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:43.071831+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:44.072087+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:45.072335+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:46.072667+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:47.072826+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:48.073066+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:49.073420+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:50.073638+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:51.073899+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:52.074098+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:53.074304+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:54.074436+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:55.074575+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:56.074836+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7030 writes, 29K keys, 7030 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7030 writes, 1364 syncs, 5.15 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b27a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559448b278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:57.075103+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:58.075504+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:59.075844+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:00.075987+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:01.076186+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:02.076320+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:03.076464+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:04.076605+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:05.076861+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:06.076990+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:07.077117+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:08.077299+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:09.077479+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:10.077647+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:11.077859+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:12.078052+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:13.078253+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:14.078396+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:15.078548+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:16.078760+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:17.079006+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:18.079241+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:19.079450+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:20.079662+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:21.079936+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:22.080136+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:23.080302+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:24.080470+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:25.080687+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:26.080857+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:27.081064+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:28.081293+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:29.081604+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:30.081819+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:31.082050+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:32.082246+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:33.082425+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:34.082608+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:35.082766+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:36.082965+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:37.083255+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:38.083469+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:39.083640+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:40.083852+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:41.084076+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:42.084267+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:43.084447+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:44.084679+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:45.084845+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:46.085142+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:47.085346+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:48.085506+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:49.085709+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:50.085895+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:51.086124+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:52.086340+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:53.086527+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:54.086720+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:55.086927+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:56.087147+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:57.087352+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:58.087535+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:59.087842+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:00.088127+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:01.088345+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:02.088502+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:03.088694+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.941345215s of 299.980346680s, submitted: 22
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:04.088898+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 704512 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:05.089029+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:06.089196+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:07.089400+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:08.089624+0000)
Jan 23 19:20:38 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2371314364' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:09.089824+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:10.089986+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:11.090206+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:12.090366+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:13.090551+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:14.090747+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:15.090905+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:16.091056+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:17.091239+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:18.091453+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:19.091641+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:20.091823+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:21.092062+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:22.092315+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:23.092690+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:24.092911+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:25.093088+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:26.093279+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:27.093512+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:28.093641+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:29.093760+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:30.093882+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:31.094028+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:32.094306+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:33.094709+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:34.094901+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:35.095054+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:36.095352+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:37.095627+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:38.095791+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:39.096019+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:40.096196+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:41.096443+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:42.096640+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:43.096728+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:44.096896+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:45.097057+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:46.097244+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:47.097395+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:48.097547+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:49.097734+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:50.097873+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:51.098051+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:52.098186+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:53.098346+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:54.098493+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:55.098617+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:56.098750+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:57.098930+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:58.099060+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:59.099202+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:00.099364+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 671744 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:01.099591+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:02.099747+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:03.099894+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:04.100033+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:05.100189+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:06.100353+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:07.100498+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:08.100671+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:09.100833+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:10.101031+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:11.101236+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:12.101374+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:13.101516+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:14.101670+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:15.101803+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:16.101894+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:17.102013+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:18.102172+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:19.102385+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:20.102528+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:21.102740+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:22.102885+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:23.103014+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:24.103155+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:25.103301+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:26.103410+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:27.103587+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:28.103732+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:29.103866+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:30.104032+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:31.104268+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:32.104405+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:33.104620+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:34.104804+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:35.104974+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:36.105115+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:37.105261+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:38.105394+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:39.105515+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 663552 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:40.105649+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:41.105847+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:42.106010+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:43.106166+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:44.106324+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:45.106643+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:46.106869+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:47.107085+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:48.107264+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:49.107456+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:50.107648+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:51.107864+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:52.108041+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 655360 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:53.108203+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:54.108408+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:55.108611+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:56.108773+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:57.108924+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:58.109098+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:59.109320+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:00.109468+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 647168 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:01.109661+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:02.109807+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:03.109972+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:04.110403+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:05.110740+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:06.110875+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:07.111003+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:08.111124+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 638976 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:09.111312+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:10.111459+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:11.111675+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:12.111854+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:13.112005+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:14.112278+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:15.112634+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:16.112896+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:17.113231+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:18.113461+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:19.113631+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:20.170833+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:21.170988+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:22.171105+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:23.171217+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:24.171334+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:25.171469+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:26.171625+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:27.171797+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:28.171908+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:29.172186+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:30.172325+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:31.172676+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:32.172828+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:33.173047+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:34.173195+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:35.173341+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:36.173460+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:37.173621+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:38.173807+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:39.174011+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:40.174157+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 630784 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:41.174394+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 622592 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:42.174527+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 622592 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:43.174700+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:44.174818+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:45.174986+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:46.175137+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:47.175354+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:48.175497+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:49.175711+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:50.175931+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:51.176127+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:52.176289+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:53.176485+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:54.176669+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:55.176817+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:56.176984+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987397 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:57.177204+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:58.177369+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce4f000/0x0/0x4ffc00000, data 0x123309/0x1dd000, compress 0x0/0x0/0x0, omap 0x12a34, meta 0x2bbd5cc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 614400 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:59.177516+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944adab800
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 175.401321411s of 176.169799805s, submitted: 90
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 581632 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:00.177814+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 516096 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:01.177956+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064240 data_alloc: 218103808 data_used: 11969
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 17137664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:02.178126+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fc1d4000/0x0/0x4ffc00000, data 0xd96adb/0xe56000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 121 ms_handle_reset con 0x55944adab800 session 0x55944d579dc0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 17145856 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:03.178342+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944b2cb000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 25403392 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:04.178629+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 122 ms_handle_reset con 0x55944b2cb000 session 0x55944cdd2540
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:05.178793+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:06.178957+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114499 data_alloc: 218103808 data_used: 12554
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:07.179114+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb9cc000/0x0/0x4ffc00000, data 0x159a27e/0x165e000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:08.179305+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:09.179460+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:10.179650+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb9cc000/0x0/0x4ffc00000, data 0x159a27e/0x165e000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.606542587s of 11.321042061s, submitted: 49
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:11.179852+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116825 data_alloc: 218103808 data_used: 12554
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:12.180490+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9c9000/0x0/0x4ffc00000, data 0x159bcfd/0x1661000, compress 0x0/0x0/0x0, omap 0x13749, meta 0x2bbc8b7), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:13.180707+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:14.180892+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:15.181110+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:16.181305+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116825 data_alloc: 218103808 data_used: 12554
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:17.181629+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:18.181819+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9c9000/0x0/0x4ffc00000, data 0x159bcfd/0x1661000, compress 0x0/0x0/0x0, omap 0x13749, meta 0x2bbc8b7), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:19.182723+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:20.182870+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:21.183036+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116825 data_alloc: 218103808 data_used: 12554
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:22.183171+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:23.183317+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 25337856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9c9000/0x0/0x4ffc00000, data 0x159bcfd/0x1661000, compress 0x0/0x0/0x0, omap 0x13749, meta 0x2bbc8b7), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944a35fc00
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.901450157s of 12.907519341s, submitted: 14
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9c6000/0x0/0x4ffc00000, data 0x15a1420/0x1666000, compress 0x0/0x0/0x0, omap 0x13749, meta 0x2bbc8b7), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:24.183460+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 25018368 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:25.183613+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 23969792 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 10
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:26.183851+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 24043520 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117725 data_alloc: 218103808 data_used: 12554
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d947c00
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:27.184040+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 23543808 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:28.184200+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9b4000/0x0/0x4ffc00000, data 0x15b1ffe/0x1678000, compress 0x0/0x0/0x0, omap 0x1405b, meta 0x2bbbfa5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 23535616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:29.184316+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 23535616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:30.184458+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 23535616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:31.184614+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87703552 unmapped: 22405120 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9b1000/0x0/0x4ffc00000, data 0x15b5564/0x167b000, compress 0x0/0x0/0x0, omap 0x140f1, meta 0x2bbbf0f), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120291 data_alloc: 218103808 data_used: 12554
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:32.184745+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87703552 unmapped: 22405120 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb9ae000/0x0/0x4ffc00000, data 0x15b89a2/0x167e000, compress 0x0/0x0/0x0, omap 0x141ff, meta 0x2bbbe01), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:33.184883+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87703552 unmapped: 22405120 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 11
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:34.185034+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87703552 unmapped: 22405120 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.349903107s of 10.990716934s, submitted: 44
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:35.185175+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87957504 unmapped: 22151168 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:36.185379+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 87965696 unmapped: 22142976 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131167 data_alloc: 218103808 data_used: 12554
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:37.185605+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88031232 unmapped: 22077440 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb989000/0x0/0x4ffc00000, data 0x15d6495/0x16a1000, compress 0x0/0x0/0x0, omap 0x14b8d, meta 0x2bbb473), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:38.185736+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 22069248 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:39.185870+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 21864448 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb981000/0x0/0x4ffc00000, data 0x15dfbe0/0x16a9000, compress 0x0/0x0/0x0, omap 0x14c94, meta 0x2bbb36c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:40.186073+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 21815296 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:41.186258+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 21700608 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136811 data_alloc: 218103808 data_used: 12554
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb970000/0x0/0x4ffc00000, data 0x15ef5e6/0x16ba000, compress 0x0/0x0/0x0, omap 0x15310, meta 0x2bbacf0), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:42.186441+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88449024 unmapped: 21659648 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:43.186599+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb96d000/0x0/0x4ffc00000, data 0x15f5696/0x16bf000, compress 0x0/0x0/0x0, omap 0x15524, meta 0x2bbaadc), peers [0,2] op hist [0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 21610496 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:44.186735+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 21610496 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:45.186875+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 21610496 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.973920822s of 11.331918716s, submitted: 109
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:46.194390+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 21577728 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb967000/0x0/0x4ffc00000, data 0x15fb920/0x16c5000, compress 0x0/0x0/0x0, omap 0x157fa, meta 0x2bba806), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135247 data_alloc: 218103808 data_used: 12826
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:47.194600+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 21577728 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:48.194753+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 21577728 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:49.194981+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88563712 unmapped: 21544960 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:50.195119+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 21536768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:51.195322+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 21446656 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb94c000/0x0/0x4ffc00000, data 0x161671b/0x16e0000, compress 0x0/0x0/0x0, omap 0x15f88, meta 0x2bba078), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138019 data_alloc: 218103808 data_used: 12826
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:52.195498+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88694784 unmapped: 21413888 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:53.195649+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88694784 unmapped: 21413888 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:54.195798+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 88694784 unmapped: 21413888 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:55.196000+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90832896 unmapped: 19275776 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb946000/0x0/0x4ffc00000, data 0x161cdb2/0x16e6000, compress 0x0/0x0/0x0, omap 0x15fec, meta 0x2bba014), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.473471642s of 10.000333786s, submitted: 42
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:56.196207+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90857472 unmapped: 19251200 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138637 data_alloc: 218103808 data_used: 12826
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:57.196354+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90857472 unmapped: 19251200 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:58.196526+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90857472 unmapped: 19251200 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:59.196693+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90939392 unmapped: 19169280 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fa798000/0x0/0x4ffc00000, data 0x162b501/0x16f4000, compress 0x0/0x0/0x0, omap 0x16532, meta 0x3d59ace), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:00.196915+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 90939392 unmapped: 19169280 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:01.197087+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91004928 unmapped: 19103744 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139877 data_alloc: 218103808 data_used: 12826
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:02.197232+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 19013632 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:03.197372+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 19013632 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1645296/0x1711000, compress 0x0/0x0/0x0, omap 0x16997, meta 0x3d59669), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:04.197535+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 18931712 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1646d55/0x1711000, compress 0x0/0x0/0x0, omap 0x16ba4, meta 0x3d5945c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:05.197711+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1646d55/0x1711000, compress 0x0/0x0/0x0, omap 0x16ba4, meta 0x3d5945c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 18931712 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.903821945s of 10.000028610s, submitted: 45
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:06.197864+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1646d55/0x1711000, compress 0x0/0x0/0x0, omap 0x16c24, meta 0x3d593dc), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 18857984 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146959 data_alloc: 218103808 data_used: 12826
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:07.198032+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91258880 unmapped: 18849792 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:08.198196+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91267072 unmapped: 18841600 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:09.198412+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fa768000/0x0/0x4ffc00000, data 0x165810a/0x1722000, compress 0x0/0x0/0x0, omap 0x171b7, meta 0x3d58e49), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91267072 unmapped: 18841600 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fa768000/0x0/0x4ffc00000, data 0x165810a/0x1722000, compress 0x0/0x0/0x0, omap 0x1724b, meta 0x3d58db5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:10.198622+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91267072 unmapped: 18841600 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:11.198800+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149511 data_alloc: 218103808 data_used: 12826
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:12.198998+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:13.199168+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa75a000/0x0/0x4ffc00000, data 0x1666adf/0x1732000, compress 0x0/0x0/0x0, omap 0x175a3, meta 0x3d58a5d), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:14.199293+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa757000/0x0/0x4ffc00000, data 0x1668d5d/0x1735000, compress 0x0/0x0/0x0, omap 0x177d0, meta 0x3d58830), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:15.199478+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.496050358s of 10.000307083s, submitted: 69
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:16.199619+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149771 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:17.199760+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:18.199908+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:19.200040+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 18784256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa750000/0x0/0x4ffc00000, data 0x166f91b/0x173c000, compress 0x0/0x0/0x0, omap 0x178ae, meta 0x3d58752), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:20.200170+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91332608 unmapped: 18776064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:21.200302+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91332608 unmapped: 18776064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151271 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:22.200504+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91332608 unmapped: 18776064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:23.200734+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91332608 unmapped: 18776064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:24.201057+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91332608 unmapped: 18776064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa740000/0x0/0x4ffc00000, data 0x167f54d/0x174c000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:25.201381+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 18751488 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.725258827s of 10.000419617s, submitted: 18
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:26.201590+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 18751488 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150591 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:27.201939+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 18751488 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa740000/0x0/0x4ffc00000, data 0x167f5a1/0x174c000, compress 0x0/0x0/0x0, omap 0x179d6, meta 0x3d5862a), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:28.202217+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 18751488 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:29.204034+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91381760 unmapped: 18726912 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:30.204321+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91381760 unmapped: 18726912 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:31.204473+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91430912 unmapped: 18677760 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:32.204633+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154717 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 18669568 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa72c000/0x0/0x4ffc00000, data 0x1693bc1/0x1760000, compress 0x0/0x0/0x0, omap 0x17cf5, meta 0x3d5830b), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:33.204792+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 18669568 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:34.204923+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa72c000/0x0/0x4ffc00000, data 0x1693bc1/0x1760000, compress 0x0/0x0/0x0, omap 0x17d8b, meta 0x3d58275), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91512832 unmapped: 18595840 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:35.205126+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 91512832 unmapped: 18595840 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.611222267s of 10.000637054s, submitted: 43
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:36.205272+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 17244160 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:37.205495+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1164941 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94019584 unmapped: 16089088 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:38.205622+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 15851520 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:39.205763+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 15654912 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa6d2000/0x0/0x4ffc00000, data 0x16ea838/0x17ba000, compress 0x0/0x0/0x0, omap 0x18d4b, meta 0x3d572b5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:40.205911+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 15687680 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:41.206115+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94437376 unmapped: 15671296 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:42.206303+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167829 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa6cf000/0x0/0x4ffc00000, data 0x16efc1f/0x17bd000, compress 0x0/0x0/0x0, omap 0x18ec7, meta 0x3d57139), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94437376 unmapped: 15671296 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:43.206498+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 15695872 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:44.206628+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 15663104 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:45.206820+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 15663104 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.430392265s of 10.096447945s, submitted: 120
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:46.206948+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 129 handle_osd_map epochs [130,131], i have 129, src has [1,131]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 15572992 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:47.207083+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177921 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94633984 unmapped: 15474688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:48.207297+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fa697000/0x0/0x4ffc00000, data 0x1722f69/0x17f3000, compress 0x0/0x0/0x0, omap 0x199d9, meta 0x3d56627), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94633984 unmapped: 15474688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:49.207489+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 15622144 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:50.207669+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94625792 unmapped: 15482880 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:51.207845+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94633984 unmapped: 15474688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:52.208098+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179695 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 15360000 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa66c000/0x0/0x4ffc00000, data 0x174ae6e/0x181c000, compress 0x0/0x0/0x0, omap 0x1a727, meta 0x3d558d9), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:53.208289+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 15360000 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa66c000/0x0/0x4ffc00000, data 0x174ae6e/0x181c000, compress 0x0/0x0/0x0, omap 0x1a727, meta 0x3d558d9), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:54.208431+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95895552 unmapped: 14213120 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:55.208655+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 14360576 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:56.208858+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.514696121s of 10.278063774s, submitted: 174
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 14737408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:57.209022+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181893 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 14737408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:58.209180+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 14680064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:59.209965+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x1779201/0x184b000, compress 0x0/0x0/0x0, omap 0x1ac7f, meta 0x3d55381), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 14581760 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:00.210171+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95543296 unmapped: 14565376 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:01.210350+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 14548992 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:02.210488+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187163 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 14548992 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:03.210636+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa62e000/0x0/0x4ffc00000, data 0x1788e64/0x185c000, compress 0x0/0x0/0x0, omap 0x1b138, meta 0x3d54ec8), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 14548992 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa62a000/0x0/0x4ffc00000, data 0x178f3e1/0x1862000, compress 0x0/0x0/0x0, omap 0x1b138, meta 0x3d54ec8), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:04.210787+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 14327808 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:05.210990+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 14327808 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:06.211164+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95854592 unmapped: 14254080 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa623000/0x0/0x4ffc00000, data 0x1795be9/0x1869000, compress 0x0/0x0/0x0, omap 0x1b1ce, meta 0x3d54e32), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:07.211362+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185971 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95911936 unmapped: 14196736 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:08.211516+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95911936 unmapped: 14196736 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:09.211688+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.706715584s of 13.003004074s, submitted: 34
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 14163968 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:10.211886+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 14163968 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:11.212131+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 14319616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:12.212335+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186251 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 14319616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa61a000/0x0/0x4ffc00000, data 0x179eca7/0x1872000, compress 0x0/0x0/0x0, omap 0x1b264, meta 0x3d54d9c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:13.212525+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa61a000/0x0/0x4ffc00000, data 0x179eca7/0x1872000, compress 0x0/0x0/0x0, omap 0x1b264, meta 0x3d54d9c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 14319616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:14.212669+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95903744 unmapped: 14204928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:15.212926+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95903744 unmapped: 14204928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:16.213100+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95903744 unmapped: 14204928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:17.213253+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186451 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95903744 unmapped: 14204928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:18.213444+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95911936 unmapped: 14196736 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:19.213595+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa5fe000/0x0/0x4ffc00000, data 0x17bb0df/0x188e000, compress 0x0/0x0/0x0, omap 0x1b6a2, meta 0x3d5495e), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.553693771s of 10.001129150s, submitted: 18
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 95911936 unmapped: 14196736 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:20.213792+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 96149504 unmapped: 13959168 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:21.213983+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 96313344 unmapped: 13795328 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:22.214149+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198723 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 12525568 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa5bb000/0x0/0x4ffc00000, data 0x17fbe9f/0x18d1000, compress 0x0/0x0/0x0, omap 0x1bc46, meta 0x3d543ba), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:23.214269+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12419072 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:24.214457+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa58d000/0x0/0x4ffc00000, data 0x1827d0d/0x18fe000, compress 0x0/0x0/0x0, omap 0x1c152, meta 0x3d53eae), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97599488 unmapped: 12509184 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:25.214620+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97542144 unmapped: 12566528 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:26.214775+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12443648 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa53c000/0x0/0x4ffc00000, data 0x1878476/0x1950000, compress 0x0/0x0/0x0, omap 0x1c5c6, meta 0x3d53a3a), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:27.214947+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214687 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12230656 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:28.215092+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa50e000/0x0/0x4ffc00000, data 0x18a3c15/0x197b000, compress 0x0/0x0/0x0, omap 0x1cc13, meta 0x3d533ed), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97296384 unmapped: 12812288 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:29.215259+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.288331032s of 10.002779007s, submitted: 127
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 12640256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:30.215457+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 12640256 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:31.216219+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa4ee000/0x0/0x4ffc00000, data 0x18c630d/0x199e000, compress 0x0/0x0/0x0, omap 0x1cdbd, meta 0x3d53243), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 97542144 unmapped: 12566528 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4ee000/0x0/0x4ffc00000, data 0x18c630d/0x199e000, compress 0x0/0x0/0x0, omap 0x1cdbd, meta 0x3d53243), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:32.216527+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212551 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 11468800 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:33.216782+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 11468800 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:34.216950+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 11296768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:35.217159+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 11296768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4d2000/0x0/0x4ffc00000, data 0x18dcf5a/0x19b7000, compress 0x0/0x0/0x0, omap 0x1d283, meta 0x3d52d7d), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:36.217383+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x18dec61/0x19b9000, compress 0x0/0x0/0x0, omap 0x1d283, meta 0x3d52d7d), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 11296768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:37.217549+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218027 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x18dec61/0x19b9000, compress 0x0/0x0/0x0, omap 0x1d283, meta 0x3d52d7d), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 11296768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:38.217697+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98443264 unmapped: 11665408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:39.217850+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98443264 unmapped: 11665408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:40.217994+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.923088074s of 11.076400757s, submitted: 38
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 11550720 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:41.218146+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 11550720 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:42.218271+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218883 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 11550720 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4a8000/0x0/0x4ffc00000, data 0x190a147/0x19e4000, compress 0x0/0x0/0x0, omap 0x1d4db, meta 0x3d52b25), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:43.218397+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11370496 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:44.218631+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa4a8000/0x0/0x4ffc00000, data 0x190a0e5/0x19e3000, compress 0x0/0x0/0x0, omap 0x1d571, meta 0x3d52a8f), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 11231232 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:45.218844+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98885632 unmapped: 11223040 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:46.219005+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0x1938d69/0x1a11000, compress 0x0/0x0/0x0, omap 0x1d8f5, meta 0x3d5270b), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98304000 unmapped: 11804672 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:47.219198+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218923 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98304000 unmapped: 11804672 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:48.219327+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 11714560 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:49.219498+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98615296 unmapped: 11493376 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:50.219631+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 98615296 unmapped: 11493376 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.679545403s of 10.544363976s, submitted: 67
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:51.219926+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 9756672 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x19b512c/0x1a8e000, compress 0x0/0x0/0x0, omap 0x1bff2, meta 0x3d5400e), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:52.220126+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228769 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x19b512c/0x1a8e000, compress 0x0/0x0/0x0, omap 0x1bff2, meta 0x3d5400e), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 9912320 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:53.220306+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 9912320 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:54.220449+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 9895936 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa3fb000/0x0/0x4ffc00000, data 0x19b85d4/0x1a91000, compress 0x0/0x0/0x0, omap 0x1be97, meta 0x3d54169), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:55.220651+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 9854976 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:56.220878+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 9846784 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:57.221017+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa3dd000/0x0/0x4ffc00000, data 0x19d5af4/0x1aaf000, compress 0x0/0x0/0x0, omap 0x1be97, meta 0x3d54169), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226787 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 9846784 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:58.221173+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 10354688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:59.221346+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 10354688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:00.221481+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 99950592 unmapped: 10158080 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.632919312s of 10.113967896s, submitted: 56
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:01.221725+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100122624 unmapped: 9986048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:02.221960+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229779 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100122624 unmapped: 9986048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa3a2000/0x0/0x4ffc00000, data 0x1a11b1f/0x1aea000, compress 0x0/0x0/0x0, omap 0x1bfc3, meta 0x3d5403d), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:03.222118+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 9887744 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:04.222254+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 9674752 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:05.222395+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 9166848 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:06.222507+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa374000/0x0/0x4ffc00000, data 0x1a3f2d9/0x1b18000, compress 0x0/0x0/0x0, omap 0x1bfc3, meta 0x3d5403d), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 9142272 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:07.222636+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232647 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 8962048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:08.222959+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa354000/0x0/0x4ffc00000, data 0x1a5e928/0x1b37000, compress 0x0/0x0/0x0, omap 0x1bfc3, meta 0x3d5403d), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 8962048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:09.223114+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 8732672 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:10.223331+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101572608 unmapped: 8536064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:11.223540+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.970051765s of 10.277941704s, submitted: 79
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101572608 unmapped: 8536064 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:12.223703+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa310000/0x0/0x4ffc00000, data 0x1aa1462/0x1b7c000, compress 0x0/0x0/0x0, omap 0x1bfc3, meta 0x3d5403d), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240981 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 8298496 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:13.223843+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101236736 unmapped: 8871936 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:14.223986+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d948000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 8691712 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d949c00
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:15.224112+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 12
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 7192576 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:16.224316+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 6946816 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa2a4000/0x0/0x4ffc00000, data 0x1b0ac75/0x1be8000, compress 0x0/0x0/0x0, omap 0x1c0ef, meta 0x3d53f11), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:17.224510+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252477 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 6914048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:18.224691+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 6914048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:19.224853+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103358464 unmapped: 6750208 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:20.225039+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 6545408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:21.225230+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa26d000/0x0/0x4ffc00000, data 0x1b464ec/0x1c1f000, compress 0x0/0x0/0x0, omap 0x1c0ef, meta 0x3d53f11), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.763832092s of 10.002084732s, submitted: 85
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 6537216 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:22.225386+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252385 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 7151616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:23.225635+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 7151616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:24.225816+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 7151616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:25.225966+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 7151616 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:26.226121+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa243000/0x0/0x4ffc00000, data 0x1b6ce12/0x1c47000, compress 0x0/0x0/0x0, omap 0x1c444, meta 0x3d53bbc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 6946816 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa22e000/0x0/0x4ffc00000, data 0x1b83bd0/0x1c5e000, compress 0x0/0x0/0x0, omap 0x1c444, meta 0x3d53bbc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:27.226293+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254653 data_alloc: 218103808 data_used: 13249
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 6946816 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:28.226487+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 7364608 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:29.226638+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 7364608 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:30.226843+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 7364608 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:31.227080+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.163558006s of 10.366992950s, submitted: 50
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 102825984 unmapped: 7282688 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:32.227253+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa210000/0x0/0x4ffc00000, data 0x1b9e8d3/0x1c7a000, compress 0x0/0x0/0x0, omap 0x1c720, meta 0x3d538e0), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262007 data_alloc: 218103808 data_used: 13249
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104185856 unmapped: 5922816 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:33.227452+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 5906432 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:34.227729+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 5931008 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:35.227974+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 5906432 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:36.228172+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104226816 unmapped: 5881856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:37.228370+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa1c6000/0x0/0x4ffc00000, data 0x1beba98/0x1cc6000, compress 0x0/0x0/0x0, omap 0x1c720, meta 0x3d538e0), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262801 data_alloc: 218103808 data_used: 13253
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6062080 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa1c7000/0x0/0x4ffc00000, data 0x1beba62/0x1cc5000, compress 0x0/0x0/0x0, omap 0x1c720, meta 0x3d538e0), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:38.228595+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104095744 unmapped: 6012928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:39.228778+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa1b3000/0x0/0x4ffc00000, data 0x1bffcb3/0x1cd9000, compress 0x0/0x0/0x0, omap 0x1c720, meta 0x3d538e0), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104103936 unmapped: 6004736 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:40.228928+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104210432 unmapped: 5898240 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:41.229067+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.892628670s of 10.119803429s, submitted: 95
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 5808128 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:42.229197+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267505 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 5808128 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:43.229378+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa181000/0x0/0x4ffc00000, data 0x1c31283/0x1d0b000, compress 0x0/0x0/0x0, omap 0x1ccc2, meta 0x3d5333e), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 5890048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:44.229755+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa181000/0x0/0x4ffc00000, data 0x1c31283/0x1d0b000, compress 0x0/0x0/0x0, omap 0x1ccc2, meta 0x3d5333e), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 5890048 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:45.230032+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104284160 unmapped: 5824512 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:46.230228+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104284160 unmapped: 5824512 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:47.230405+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266231 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104284160 unmapped: 5824512 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:48.230589+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104284160 unmapped: 5824512 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:49.230773+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 5791744 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:50.230931+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa161000/0x0/0x4ffc00000, data 0x1c4e329/0x1d29000, compress 0x0/0x0/0x0, omap 0x1cf87, meta 0x3d53079), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 5750784 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:51.231107+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 5750784 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:52.231256+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273061 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104513536 unmapped: 5595136 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:53.231414+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.602644920s of 11.791436195s, submitted: 71
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 104513536 unmapped: 5595136 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:54.231633+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa139000/0x0/0x4ffc00000, data 0x1c74ddd/0x1d51000, compress 0x0/0x0/0x0, omap 0x1d337, meta 0x3d52cc9), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105463808 unmapped: 4644864 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:55.231779+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 4579328 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:56.231932+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 4579328 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:57.232148+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280915 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 4497408 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:58.232348+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 4431872 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:59.232526+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 4431872 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:00.232667+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa0fd000/0x0/0x4ffc00000, data 0x1cad3eb/0x1d8c000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 4431872 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:01.232857+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105029632 unmapped: 5079040 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:02.233028+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280491 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105029632 unmapped: 5079040 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:03.233253+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa0cf000/0x0/0x4ffc00000, data 0x1cde664/0x1dbd000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 5046272 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:04.233440+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.194561958s of 11.065369606s, submitted: 60
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 4907008 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:05.233633+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 4907008 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:06.233827+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 4907008 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:07.234009+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284335 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa0a9000/0x0/0x4ffc00000, data 0x1d038b2/0x1de2000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 4907008 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:08.234149+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa0a9000/0x0/0x4ffc00000, data 0x1d038b2/0x1de2000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 4718592 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:09.234339+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:10.234494+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 4718592 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:11.234627+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 3465216 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:12.234847+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 3457024 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa06a000/0x0/0x4ffc00000, data 0x1d42237/0x1e22000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1290623 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:13.235024+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 3457024 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:14.235136+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 3252224 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.547468185s of 10.381765366s, submitted: 62
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:15.235251+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 3104768 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa04b000/0x0/0x4ffc00000, data 0x1d60a71/0x1e40000, compress 0x0/0x0/0x0, omap 0x1d5e4, meta 0x3d52a1c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:16.235444+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 2940928 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:17.235649+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3022848 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288001 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:18.235866+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3022848 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:19.236076+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107094016 unmapped: 3014656 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa002000/0x0/0x4ffc00000, data 0x1dabc5b/0x1e8a000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:20.236270+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 2809856 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:21.236656+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 2695168 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:22.236873+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 2695168 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304871 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9f82000/0x0/0x4ffc00000, data 0x1e2780a/0x1f08000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:23.237000+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 1859584 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:24.237183+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 1859584 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:25.237382+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 1859584 heap: 110108672 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.868585587s of 10.334239960s, submitted: 88
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:26.237500+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 2670592 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:27.237671+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 2670592 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311337 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:28.237800+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9f08000/0x0/0x4ffc00000, data 0x1ea1267/0x1f83000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 108601344 unmapped: 2555904 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:29.237950+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 3440640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:30.238114+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 3440640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 ms_handle_reset con 0x55944d948000 session 0x55944acbea80
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 ms_handle_reset con 0x55944d949c00 session 0x55944d3ee8c0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9eff000/0x0/0x4ffc00000, data 0x1eacb77/0x1f8d000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [0,0,0,0,0,3])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:31.238416+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 109543424 unmapped: 1613824 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9edd000/0x0/0x4ffc00000, data 0x1ecbeb2/0x1fad000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:32.238690+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 1261568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 13
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310895 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:33.238884+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 1171456 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:34.239156+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 1220608 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1f1b991/0x1ffe000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:35.239340+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 925696 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.739799500s of 10.003137589s, submitted: 314
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:36.239595+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 925696 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:37.239712+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 876544 heap: 112205824 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323303 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:38.239835+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 548864 heap: 112205824 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9ded000/0x0/0x4ffc00000, data 0x1fbb818/0x209d000, compress 0x0/0x0/0x0, omap 0x1da84, meta 0x3d5257c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:39.240158+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 540672 heap: 112205824 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:40.240293+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112713728 unmapped: 540672 heap: 113254400 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:41.240703+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 475136 heap: 113254400 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:42.240851+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 1507328 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9d9a000/0x0/0x4ffc00000, data 0x200e8f4/0x20f0000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x3d512fc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329299 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:43.240993+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 1507328 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:44.241106+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 1507328 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:45.241227+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 1417216 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.323970795s of 10.002117157s, submitted: 111
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:46.241356+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 262144 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:47.241601+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 122880 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325725 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9d60000/0x0/0x4ffc00000, data 0x204b38f/0x212c000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x3d512fc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:48.241756+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 122880 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:49.241897+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 122880 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:50.242081+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 122880 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:51.242320+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 1097728 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:52.242462+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 1097728 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331523 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:53.242628+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 1630208 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9ced000/0x0/0x4ffc00000, data 0x20c02b3/0x219f000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x3d512fc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:54.242737+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 1597440 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:55.242856+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 1597440 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.663146019s of 10.544254303s, submitted: 83
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:56.242984+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 1531904 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:57.243193+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 1531904 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343355 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8ae0000/0x0/0x4ffc00000, data 0x2129bb0/0x220b000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x4ef12fc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:58.243320+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 2580480 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:59.243682+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 2342912 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:00.243778+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 2342912 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:01.243935+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 2072576 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8a97000/0x0/0x4ffc00000, data 0x21736d7/0x2254000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x4ef12fc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:02.244059+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 1892352 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339227 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:03.244170+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 1884160 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:04.244339+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 1884160 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:05.244477+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 1933312 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.875925064s of 10.000294685s, submitted: 74
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:06.244646+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8a5d000/0x0/0x4ffc00000, data 0x21adb23/0x228e000, compress 0x0/0x0/0x0, omap 0x1ed04, meta 0x4ef12fc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 860160 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:07.244854+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 860160 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341421 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:08.245041+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 786432 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:09.245124+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 778240 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:10.245279+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 778240 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:11.245532+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 778240 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ab000/0x0/0x4ffc00000, data 0x21bf83a/0x229f000, compress 0x0/0x0/0x0, omap 0x22484, meta 0x608db7c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:12.245783+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 770048 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343289 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:13.245924+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 770048 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:14.246149+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 753664 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bf86d/0x229e000, compress 0x0/0x0/0x0, omap 0x22484, meta 0x608db7c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:15.246280+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 753664 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bf86d/0x229e000, compress 0x0/0x0/0x0, omap 0x22484, meta 0x608db7c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.615035057s of 10.021672249s, submitted: 32
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:16.246388+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 753664 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:17.246628+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 753664 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342715 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:18.246744+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 753664 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:19.246892+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:20.247102+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ae000/0x0/0x4ffc00000, data 0x21bf969/0x229e000, compress 0x0/0x0/0x0, omap 0x225ac, meta 0x608da54), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:21.247257+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:22.247380+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341597 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:23.247515+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bf937/0x229e000, compress 0x0/0x0/0x0, omap 0x22321, meta 0x608dcdf), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:24.247665+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:25.247811+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 745472 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:26.247996+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.763580322s of 10.252605438s, submitted: 23
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:27.248166+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343145 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:28.248317+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ab000/0x0/0x4ffc00000, data 0x21bfb5e/0x229f000, compress 0x0/0x0/0x0, omap 0x23561, meta 0x608ca9f), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ab000/0x0/0x4ffc00000, data 0x21bfb5e/0x229f000, compress 0x0/0x0/0x0, omap 0x23561, meta 0x608ca9f), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:29.248497+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:30.248619+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bfc28/0x229f000, compress 0x0/0x0/0x0, omap 0x23561, meta 0x608ca9f), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:31.248773+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 737280 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:32.248900+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117768192 unmapped: 729088 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342523 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:33.249036+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 720896 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:34.249284+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 720896 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bfbc3/0x229e000, compress 0x0/0x0/0x0, omap 0x23685, meta 0x608c97b), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:35.249417+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 720896 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:36.249644+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 720896 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.369353294s of 10.191442490s, submitted: 32
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:37.249761+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 720896 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344071 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:38.249940+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bfcc3/0x229f000, compress 0x0/0x0/0x0, omap 0x237a9, meta 0x608c857), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 712704 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:39.250130+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bfcfa/0x229f000, compress 0x0/0x0/0x0, omap 0x237a9, meta 0x608c857), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 679936 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:40.250311+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 679936 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:41.250605+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 671744 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:42.250766+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 671744 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bfd2d/0x229f000, compress 0x0/0x0/0x0, omap 0x237a9, meta 0x608c857), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344103 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:43.250983+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:44.251279+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:45.251496+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:46.251649+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.359336853s of 10.002920151s, submitted: 29
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:47.251871+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345189 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:48.252095+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bfec0/0x229f000, compress 0x0/0x0/0x0, omap 0x249e9, meta 0x608b617), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:49.252292+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21bfec0/0x229f000, compress 0x0/0x0/0x0, omap 0x249e9, meta 0x608b617), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:50.252545+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 630784 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:51.252837+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 622592 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:52.253050+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 622592 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bff87/0x229e000, compress 0x0/0x0/0x0, omap 0x249e9, meta 0x608b617), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:53.253258+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344503 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 622592 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:54.253469+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21bff87/0x229e000, compress 0x0/0x0/0x0, omap 0x249e9, meta 0x608b617), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 622592 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:55.253687+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 1654784 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:56.253836+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3238 syncs, 3.51 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4350 writes, 15K keys, 4350 commit groups, 1.0 writes per commit group, ingest: 20.29 MB, 0.03 MB/s
                                           Interval WAL: 4350 writes, 1874 syncs, 2.32 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 1646592 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.196742058s of 10.349185944s, submitted: 31
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:57.253976+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:58.254180+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345843 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21c00b3/0x229f000, compress 0x0/0x0/0x0, omap 0x24b0d, meta 0x608b4f3), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:59.254351+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 1622016 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21c00b3/0x229f000, compress 0x0/0x0/0x0, omap 0x24b0d, meta 0x608b4f3), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21c00b3/0x229f000, compress 0x0/0x0/0x0, omap 0x24b0d, meta 0x608b4f3), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:00.254490+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 1622016 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:01.254706+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 1622016 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:02.254865+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 1622016 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 ms_handle_reset con 0x559449cd6800 session 0x559448b4c000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944adab800
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21c0083/0x229e000, compress 0x0/0x0/0x0, omap 0x2af24, meta 0x60850dc), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:03.255110+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347249 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 1622016 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc ms_handle_reset ms_handle_reset con 0x55944a35f400
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: get_auth_request con 0x55944d8fac00 auth_method 0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:04.255253+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 ms_handle_reset con 0x55944adaa800 session 0x55944c8f88c0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d17b400
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 ms_handle_reset con 0x55944a48a000 session 0x55944d190fc0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944adaa000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:05.255491+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:06.255650+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:07.255805+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:08.255977+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346801 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21c011d/0x229f000, compress 0x0/0x0/0x0, omap 0x2b446, meta 0x6084bba), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.499647141s of 11.616078377s, submitted: 24
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:09.256183+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:10.256343+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 1638400 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:11.256513+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ad000/0x0/0x4ffc00000, data 0x21c0151/0x229f000, compress 0x0/0x0/0x0, omap 0x2b720, meta 0x60848e0), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 1630208 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:12.256694+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:13.256815+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348111 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:14.256955+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:15.257117+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78aa000/0x0/0x4ffc00000, data 0x21c0278/0x229f000, compress 0x0/0x0/0x0, omap 0x2bc42, meta 0x60843be), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:16.257281+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:17.257520+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f78ac000/0x0/0x4ffc00000, data 0x21c02ed/0x22a0000, compress 0x0/0x0/0x0, omap 0x2bd1d, meta 0x60842e3), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 142 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:18.257693+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352801 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 1597440 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.166779518s of 10.339650154s, submitted: 21
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:19.257871+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f78a7000/0x0/0x4ffc00000, data 0x21c1ef2/0x22a3000, compress 0x0/0x0/0x0, omap 0x2c10b, meta 0x6083ef5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 1581056 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:20.258018+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f78a7000/0x0/0x4ffc00000, data 0x21c1ef2/0x22a3000, compress 0x0/0x0/0x0, omap 0x2c39c, meta 0x6083c64), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 1581056 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:21.258221+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f78a7000/0x0/0x4ffc00000, data 0x21c1f57/0x22a3000, compress 0x0/0x0/0x0, omap 0x2c39c, meta 0x6083c64), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 1548288 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:22.258424+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 1548288 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:23.258604+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352401 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:24.258743+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:25.258868+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f78a8000/0x0/0x4ffc00000, data 0x21c1f46/0x22a3000, compress 0x0/0x0/0x0, omap 0x2c79a, meta 0x6083866), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:26.259068+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:27.259211+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 1523712 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:28.259374+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352481 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 1523712 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:29.259517+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.936616898s of 10.705034256s, submitted: 74
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a4000/0x0/0x4ffc00000, data 0x21c3afd/0x22a5000, compress 0x0/0x0/0x0, omap 0x2d08a, meta 0x6082f76), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:30.259658+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:31.260137+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a8000/0x0/0x4ffc00000, data 0x21c3b91/0x22a4000, compress 0x0/0x0/0x0, omap 0x2d43f, meta 0x6082bc1), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:32.260392+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:33.260618+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354649 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:34.260808+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a7000/0x0/0x4ffc00000, data 0x21c3bc4/0x22a4000, compress 0x0/0x0/0x0, omap 0x2d719, meta 0x60828e7), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:35.260970+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:36.261154+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:37.261296+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:38.261482+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356629 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a6000/0x0/0x4ffc00000, data 0x21c3dc0/0x22a6000, compress 0x0/0x0/0x0, omap 0x2dc3b, meta 0x60823c5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:39.261618+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.980519295s of 10.002510071s, submitted: 24
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:40.261800+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:41.262010+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:42.262199+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a2000/0x0/0x4ffc00000, data 0x21c3eba/0x22a7000, compress 0x0/0x0/0x0, omap 0x2df5e, meta 0x60820a2), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:43.262374+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358321 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 1482752 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:44.262519+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944a48a000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118079488 unmapped: 1466368 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:45.262652+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 1458176 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:46.262824+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 1458176 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:47.262970+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 14
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118177792 unmapped: 1368064 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f78a4000/0x0/0x4ffc00000, data 0x21c40ba/0x22a8000, compress 0x0/0x0/0x0, omap 0x2e512, meta 0x6081aee), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:48.263108+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362463 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118185984 unmapped: 1359872 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:49.263295+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.368686676s of 10.002033234s, submitted: 38
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118185984 unmapped: 1359872 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:50.263442+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118194176 unmapped: 1351680 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:51.263675+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118202368 unmapped: 1343488 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:52.263822+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118218752 unmapped: 1327104 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:53.263980+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372135 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7878000/0x0/0x4ffc00000, data 0x21ed6df/0x22d2000, compress 0x0/0x0/0x0, omap 0x2ec33, meta 0x60813cd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 1556480 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:54.264224+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 1531904 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 ms_handle_reset con 0x55944a4b8000 session 0x55944b310540
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d949800
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:55.264391+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 1507328 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:56.264545+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7827000/0x0/0x4ffc00000, data 0x223fe5c/0x2324000, compress 0x0/0x0/0x0, omap 0x2eda0, meta 0x6081260), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 1122304 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f77fb000/0x0/0x4ffc00000, data 0x226cebf/0x2351000, compress 0x0/0x0/0x0, omap 0x2ee32, meta 0x60811ce), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:57.264743+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f77fb000/0x0/0x4ffc00000, data 0x226cebf/0x2351000, compress 0x0/0x0/0x0, omap 0x2ee32, meta 0x60811ce), peers [0,2] op hist [0,0,0,0,0,0,0,0,1,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119480320 unmapped: 1114112 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x2279b91/0x235d000, compress 0x0/0x0/0x0, omap 0x2eec4, meta 0x608113c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:58.265026+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378845 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 974848 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:59.265200+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.478974342s of 10.002267838s, submitted: 104
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 1933312 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:00.265367+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 1933312 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:01.265543+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 950272 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:02.265686+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121036800 unmapped: 606208 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:03.265909+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384765 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121036800 unmapped: 606208 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7732000/0x0/0x4ffc00000, data 0x233650c/0x241a000, compress 0x0/0x0/0x0, omap 0x2f677, meta 0x6080989), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:04.266067+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120987648 unmapped: 655360 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:05.266291+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 1368064 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:06.266438+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 2277376 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:07.266603+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120209408 unmapped: 2482176 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f76bf000/0x0/0x4ffc00000, data 0x23a68fb/0x248c000, compress 0x0/0x0/0x0, omap 0x2fd98, meta 0x6080268), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:08.266802+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1394903 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 2105344 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:09.266983+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.516779900s of 10.085855484s, submitted: 151
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121659392 unmapped: 1032192 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:10.267138+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121692160 unmapped: 2048000 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:11.267342+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 2580480 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:12.267489+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121217024 unmapped: 2523136 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f764b000/0x0/0x4ffc00000, data 0x241c021/0x2501000, compress 0x0/0x0/0x0, omap 0x301df, meta 0x607fe21), peers [0,2] op hist [0,0,0,0,0,0,3])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:13.267697+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406365 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 2424832 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:14.271666+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121634816 unmapped: 2105344 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f75f4000/0x0/0x4ffc00000, data 0x2471cde/0x2557000, compress 0x0/0x0/0x0, omap 0x3034c, meta 0x607fcb4), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:15.271804+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f75f4000/0x0/0x4ffc00000, data 0x2471cde/0x2557000, compress 0x0/0x0/0x0, omap 0x3034c, meta 0x607fcb4), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 121634816 unmapped: 2105344 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:16.272106+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122429440 unmapped: 2359296 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:17.272239+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7598000/0x0/0x4ffc00000, data 0x24ce02c/0x25b3000, compress 0x0/0x0/0x0, omap 0x304b9, meta 0x607fb47), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 1966080 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:18.272381+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401013 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 1966080 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:19.272523+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 1966080 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:20.272705+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7599000/0x0/0x4ffc00000, data 0x24ce091/0x25b3000, compress 0x0/0x0/0x0, omap 0x30626, meta 0x607f9da), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.475150108s of 10.776302338s, submitted: 134
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123215872 unmapped: 1572864 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:21.272881+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123215872 unmapped: 1572864 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:22.273083+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123215872 unmapped: 1572864 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f755e000/0x0/0x4ffc00000, data 0x250a6bf/0x25ee000, compress 0x0/0x0/0x0, omap 0x30ab6, meta 0x607f54a), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:23.273255+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404895 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 2236416 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:24.273424+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 2236416 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:25.273583+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f755e000/0x0/0x4ffc00000, data 0x250a6bf/0x25ee000, compress 0x0/0x0/0x0, omap 0x30b48, meta 0x607f4b8), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 3104768 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:26.273786+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7536000/0x0/0x4ffc00000, data 0x2532350/0x2616000, compress 0x0/0x0/0x0, omap 0x30b48, meta 0x607f4b8), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 3014656 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:27.273946+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 3006464 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:28.274089+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405687 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 3006464 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:29.274255+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 2850816 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:30.274397+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.857073784s of 10.541455269s, submitted: 34
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 2850816 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:31.274634+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f751d000/0x0/0x4ffc00000, data 0x254b608/0x262f000, compress 0x0/0x0/0x0, omap 0x30eb4, meta 0x607f14c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 2850816 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:32.274856+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123068416 unmapped: 2768896 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:33.274998+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405681 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 2686976 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:34.275168+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 2654208 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:35.275298+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 2654208 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7505000/0x0/0x4ffc00000, data 0x2564939/0x2647000, compress 0x0/0x0/0x0, omap 0x31220, meta 0x607ede0), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:36.275437+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 2457600 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:37.275656+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 2514944 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:38.275819+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409723 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 2424832 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:39.275956+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f74ec000/0x0/0x4ffc00000, data 0x257a929/0x265e000, compress 0x0/0x0/0x0, omap 0x31127, meta 0x607eed9), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 2424832 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:40.276117+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 2424832 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.260945320s of 10.114643097s, submitted: 44
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:41.276401+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 1359872 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:42.276614+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f74df000/0x0/0x4ffc00000, data 0x2589e89/0x266d000, compress 0x0/0x0/0x0, omap 0x31127, meta 0x607eed9), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 1359872 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:43.276750+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412961 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 1335296 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:44.277078+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 1269760 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:45.277287+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f74da000/0x0/0x4ffc00000, data 0x258b908/0x2670000, compress 0x0/0x0/0x0, omap 0x3149e, meta 0x607eb62), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f74da000/0x0/0x4ffc00000, data 0x258b908/0x2670000, compress 0x0/0x0/0x0, omap 0x3149e, meta 0x607eb62), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 1269760 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:46.277476+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 1679360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:47.277664+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 1679360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:48.277817+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412961 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 1679360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:49.278001+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f74ca000/0x0/0x4ffc00000, data 0x259d6bc/0x2682000, compress 0x0/0x0/0x0, omap 0x31bd0, meta 0x607e430), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 2727936 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:50.278182+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 2654208 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.887678146s of 10.001625061s, submitted: 29
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:51.278379+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 2654208 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:52.278935+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 2654208 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:53.279120+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f74bb000/0x0/0x4ffc00000, data 0x25ac412/0x2691000, compress 0x0/0x0/0x0, omap 0x31d3d, meta 0x607e2c3), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412345 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f74bb000/0x0/0x4ffc00000, data 0x25ac412/0x2691000, compress 0x0/0x0/0x0, omap 0x31d3d, meta 0x607e2c3), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 2555904 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:54.279252+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 2555904 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:55.279398+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 2555904 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:56.279555+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124559360 unmapped: 2326528 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:57.279819+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f7475000/0x0/0x4ffc00000, data 0x25f281e/0x26d7000, compress 0x0/0x0/0x0, omap 0x3213b, meta 0x607dec5), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 2318336 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:58.279992+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419971 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 2318336 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:59.280163+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f7475000/0x0/0x4ffc00000, data 0x25f2883/0x26d7000, compress 0x0/0x0/0x0, omap 0x32216, meta 0x607ddea), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 2187264 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:00.280331+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.600919724s of 10.001583099s, submitted: 23
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 2187264 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:01.280514+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124788736 unmapped: 2097152 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:02.280671+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 3284992 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:03.280798+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428087 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 3260416 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:04.280931+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 123633664 unmapped: 3252224 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:05.281065+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f742d000/0x0/0x4ffc00000, data 0x2637a16/0x271f000, compress 0x0/0x0/0x0, omap 0x326ef, meta 0x607d911), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124796928 unmapped: 2088960 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:06.281203+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124985344 unmapped: 1900544 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:07.281403+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 124985344 unmapped: 1900544 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:08.281547+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428321 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 1728512 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:09.281733+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 1728512 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:10.281911+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.224463463s of 10.038451195s, submitted: 48
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 1728512 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:11.282334+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f73f7000/0x0/0x4ffc00000, data 0x266e336/0x2755000, compress 0x0/0x0/0x0, omap 0x32656, meta 0x607d9aa), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 1728512 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:12.282602+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 1564672 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:13.282850+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430965 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 1564672 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:14.282988+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f73d4000/0x0/0x4ffc00000, data 0x268dc71/0x2776000, compress 0x0/0x0/0x0, omap 0x32ac5, meta 0x607d53b), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 1564672 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:15.283137+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 1515520 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:16.283273+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 1515520 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:17.283435+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 1515520 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:18.283615+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1433203 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 1507328 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:19.283773+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125566976 unmapped: 1318912 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:20.283938+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f73a6000/0x0/0x4ffc00000, data 0x26b8deb/0x27a4000, compress 0x0/0x0/0x0, omap 0x32f53, meta 0x607d0ad), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125509632 unmapped: 2424832 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:21.284127+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7379000/0x0/0x4ffc00000, data 0x26e7b1a/0x27d3000, compress 0x0/0x0/0x0, omap 0x3383e, meta 0x607c7c2), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.544516563s of 10.881173134s, submitted: 74
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 2416640 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:22.284387+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 2416640 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:23.284621+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440229 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 2220032 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:24.284811+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7379000/0x0/0x4ffc00000, data 0x26e7b49/0x27d2000, compress 0x0/0x0/0x0, omap 0x33962, meta 0x607c69e), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 2220032 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:25.284995+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125657088 unmapped: 2277376 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:26.285203+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f7363000/0x0/0x4ffc00000, data 0x26fe25a/0x27e9000, compress 0x0/0x0/0x0, omap 0x33962, meta 0x607c69e), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 2686976 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:27.285400+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 2686976 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:28.285539+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445573 data_alloc: 218103808 data_used: 13249
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 1245184 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:29.285694+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 974848 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:30.285841+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 2023424 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:31.286003+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f7300000/0x0/0x4ffc00000, data 0x275f3af/0x284c000, compress 0x0/0x0/0x0, omap 0x3438e, meta 0x607bc72), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:32.286156+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 2023424 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.106158257s of 10.395409584s, submitted: 62
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f7300000/0x0/0x4ffc00000, data 0x275f3af/0x284c000, compress 0x0/0x0/0x0, omap 0x34469, meta 0x607bb97), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:33.286297+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127303680 unmapped: 1679360 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451779 data_alloc: 218103808 data_used: 13249
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:34.286452+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 1597440 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:35.286622+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127393792 unmapped: 1589248 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f72d8000/0x0/0x4ffc00000, data 0x27843c2/0x2872000, compress 0x0/0x0/0x0, omap 0x34ad7, meta 0x607b529), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:36.286756+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127598592 unmapped: 1384448 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f7287000/0x0/0x4ffc00000, data 0x27d6e41/0x28c5000, compress 0x0/0x0/0x0, omap 0x34d1f, meta 0x607b2e1), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:37.286895+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127598592 unmapped: 1384448 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:38.287046+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 127598592 unmapped: 1384448 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453553 data_alloc: 218103808 data_used: 13249
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:39.287188+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 2056192 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:40.287313+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 2056192 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:41.287461+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 2056192 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f7252000/0x0/0x4ffc00000, data 0x280cd0c/0x28fa000, compress 0x0/0x0/0x0, omap 0x354d2, meta 0x607ab2e), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:42.287603+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 3014656 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.356106758s of 10.192037582s, submitted: 61
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:43.287730+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 1949696 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463525 data_alloc: 218103808 data_used: 13249
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:44.287855+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 1892352 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:45.287988+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 1654784 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:46.288131+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 2940928 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f71e7000/0x0/0x4ffc00000, data 0x28750e2/0x2965000, compress 0x0/0x0/0x0, omap 0x35962, meta 0x607a69e), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:47.288273+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128212992 unmapped: 2867200 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:48.288391+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 2859008 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463183 data_alloc: 218103808 data_used: 13249
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:49.288517+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 2850816 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:50.288647+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 3620864 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:51.288784+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 2310144 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:52.288916+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f715e000/0x0/0x4ffc00000, data 0x28fbfc1/0x29ec000, compress 0x0/0x0/0x0, omap 0x3638f, meta 0x6079c71), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 2310144 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.403574944s of 10.003150940s, submitted: 83
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:53.289030+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129933312 unmapped: 2195456 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471809 data_alloc: 218103808 data_used: 13249
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:54.289149+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130277376 unmapped: 1851392 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:55.289293+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 1843200 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:56.289439+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130293760 unmapped: 1835008 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f70fa000/0x0/0x4ffc00000, data 0x2962b7a/0x2a51000, compress 0x0/0x0/0x0, omap 0x36a67, meta 0x6079599), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:57.289588+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130228224 unmapped: 1900544 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f70fa000/0x0/0x4ffc00000, data 0x2962b7a/0x2a51000, compress 0x0/0x0/0x0, omap 0x36b8b, meta 0x6079475), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:58.289747+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 2768896 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480545 data_alloc: 218103808 data_used: 13249
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:59.289885+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 2719744 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:00.290027+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129736704 unmapped: 3440640 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f70b5000/0x0/0x4ffc00000, data 0x29a479d/0x2a95000, compress 0x0/0x0/0x0, omap 0x376fb, meta 0x6078905), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:01.290183+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 3293184 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:02.290318+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 131153920 unmapped: 2023424 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.886514187s of 10.002050400s, submitted: 99
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 15
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:03.290467+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 1376256 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1487345 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:04.290610+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 2424832 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f7053000/0x0/0x4ffc00000, data 0x2a09072/0x2af9000, compress 0x0/0x0/0x0, omap 0x375d4, meta 0x6078a2c), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:05.290757+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 2301952 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f7019000/0x0/0x4ffc00000, data 0x2a41d3a/0x2b33000, compress 0x0/0x0/0x0, omap 0x376ac, meta 0x6078954), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:06.290883+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 2170880 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f6fee000/0x0/0x4ffc00000, data 0x2a6b7f7/0x2b5e000, compress 0x0/0x0/0x0, omap 0x3785c, meta 0x60787a4), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:07.291136+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 2162688 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:08.291287+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 2154496 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 154 handle_osd_map epochs [156,156], i have 154, src has [1,156]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 154 handle_osd_map epochs [155,156], i have 154, src has [1,156]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502319 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:09.291472+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 4030464 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f6fb0000/0x0/0x4ffc00000, data 0x2aa8336/0x2b9a000, compress 0x0/0x0/0x0, omap 0x38185, meta 0x6077e7b), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:10.291663+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 4030464 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:11.291840+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f6f72000/0x0/0x4ffc00000, data 0x2ae593d/0x2bd8000, compress 0x0/0x0/0x0, omap 0x38455, meta 0x6077bab), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 3784704 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:12.292175+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 3481600 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.291734695s of 10.002635002s, submitted: 159
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:13.292365+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 3481600 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f6f74000/0x0/0x4ffc00000, data 0x2ae5a07/0x2bd8000, compress 0x0/0x0/0x0, omap 0x38575, meta 0x6077a8b), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1506503 data_alloc: 218103808 data_used: 13098
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:14.292658+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 3481600 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f6f6f000/0x0/0x4ffc00000, data 0x2ae74e6/0x2bdb000, compress 0x0/0x0/0x0, omap 0x3896f, meta 0x6077691), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:15.292906+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 3481600 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:16.293153+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134053888 unmapped: 2269184 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:17.293316+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134053888 unmapped: 2269184 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:18.293508+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134086656 unmapped: 2236416 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1513333 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:19.293713+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134086656 unmapped: 2236416 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:20.293897+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134086656 unmapped: 2236416 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f6f39000/0x0/0x4ffc00000, data 0x2b1c891/0x2c11000, compress 0x0/0x0/0x0, omap 0x38db7, meta 0x6077249), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:21.294126+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134127616 unmapped: 3244032 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:22.294270+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134127616 unmapped: 3244032 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.154506683s of 10.454253197s, submitted: 65
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:23.294422+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 158 handle_osd_map epochs [158,159], i have 159, src has [1,159]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 3481600 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519225 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6ef3000/0x0/0x4ffc00000, data 0x2b62258/0x2c58000, compress 0x0/0x0/0x0, omap 0x39087, meta 0x6076f79), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:24.294788+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 133824512 unmapped: 3547136 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:25.294948+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 133824512 unmapped: 3547136 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:26.295128+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6ec8000/0x0/0x4ffc00000, data 0x2b8ada5/0x2c82000, compress 0x0/0x0/0x0, omap 0x3956c, meta 0x6076a94), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 133832704 unmapped: 3538944 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:27.295275+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 133832704 unmapped: 3538944 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:28.295529+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134881280 unmapped: 2490368 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1531411 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:29.295845+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 2523136 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:30.296011+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 2441216 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 160 ms_handle_reset con 0x55944a48a000 session 0x55944c4d4540
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6e7c000/0x0/0x4ffc00000, data 0x2bd6838/0x2cd0000, compress 0x0/0x0/0x0, omap 0x39c3f, meta 0x60763c1), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:31.296229+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135446528 unmapped: 2973696 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6e3b000/0x0/0x4ffc00000, data 0x2c19312/0x2d11000, compress 0x0/0x0/0x0, omap 0x39ccf, meta 0x6076331), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:32.296396+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 2965504 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 16
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.442367554s of 10.008372307s, submitted: 362
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:33.296636+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 2695168 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537411 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:34.296818+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 2695168 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:35.296962+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 2695168 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:36.297147+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:37.297354+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f6dee000/0x0/0x4ffc00000, data 0x2c643f0/0x2d5e000, compress 0x0/0x0/0x0, omap 0x3a901, meta 0x60756ff), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:38.297540+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1536101 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:39.297748+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:40.298031+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:41.298300+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136806400 unmapped: 1613824 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:42.298625+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:43.298794+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541275 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:44.298959+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:45.299179+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:46.299599+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:47.299770+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:48.299912+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541275 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:49.300093+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:50.300283+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:51.300495+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:52.300699+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:53.300841+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541275 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:54.301035+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:55.301233+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:56.301430+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 1712128 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:57.301657+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:58.301803+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541275 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:59.301958+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:00.302541+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6dca000/0x0/0x4ffc00000, data 0x2c84a19/0x2d80000, compress 0x0/0x0/0x0, omap 0x3ac2e, meta 0x60753d2), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:01.302890+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:02.303060+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:03.303222+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 1703936 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.615039825s of 30.784770966s, submitted: 45
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541411 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:04.303411+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 2891776 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:05.303652+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 2891776 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:06.303794+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 3940352 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6daa000/0x0/0x4ffc00000, data 0x2ca69f5/0x2da2000, compress 0x0/0x0/0x0, omap 0x3ae6e, meta 0x6075192), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:07.303921+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 3874816 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:08.304074+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 3874816 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:09.304229+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541711 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135626752 unmapped: 3842048 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:10.304389+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135626752 unmapped: 3842048 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:11.304634+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d99000/0x0/0x4ffc00000, data 0x2cb8492/0x2db3000, compress 0x0/0x0/0x0, omap 0x3aefe, meta 0x6075102), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135626752 unmapped: 3842048 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:12.304869+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d7b000/0x0/0x4ffc00000, data 0x2cd5957/0x2dd1000, compress 0x0/0x0/0x0, omap 0x3aefe, meta 0x6075102), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135626752 unmapped: 3842048 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:13.305017+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135659520 unmapped: 3809280 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.969015121s of 10.246096611s, submitted: 26
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:14.305254+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1548623 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135692288 unmapped: 3776512 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d49000/0x0/0x4ffc00000, data 0x2d06bc6/0x2e03000, compress 0x0/0x0/0x0, omap 0x3b13e, meta 0x6074ec2), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:15.305465+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 3678208 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d49000/0x0/0x4ffc00000, data 0x2d06c2b/0x2e03000, compress 0x0/0x0/0x0, omap 0x3b1ce, meta 0x6074e32), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:16.305675+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135938048 unmapped: 3530752 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:17.305790+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135938048 unmapped: 3530752 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:18.305923+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 3522560 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:19.306155+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545941 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 3522560 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d1c000/0x0/0x4ffc00000, data 0x2d34e2b/0x2e30000, compress 0x0/0x0/0x0, omap 0x3b576, meta 0x6074a8a), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:20.306320+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6d1c000/0x0/0x4ffc00000, data 0x2d34e2b/0x2e30000, compress 0x0/0x0/0x0, omap 0x3b576, meta 0x6074a8a), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 3653632 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:21.306491+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 3514368 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:22.306653+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 3514368 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:23.306834+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 3514368 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:24.307074+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551863 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 3514368 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:25.307220+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 3514368 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x2d59e53/0x2e56000, compress 0x0/0x0/0x0, omap 0x3b937, meta 0x60746c9), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.918767929s of 11.992917061s, submitted: 58
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:26.307412+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:27.307686+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:28.307902+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:29.308108+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1552883 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6ce5000/0x0/0x4ffc00000, data 0x2d694af/0x2e65000, compress 0x0/0x0/0x0, omap 0x3b937, meta 0x60746c9), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,3])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:30.308334+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:31.308606+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6ce5000/0x0/0x4ffc00000, data 0x2d694af/0x2e65000, compress 0x0/0x0/0x0, omap 0x3b937, meta 0x60746c9), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:32.308781+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 3350528 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:33.309046+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.572957993s, txc = 0x55944b372c00, txc bytes = 2273, txc ios = 1, txc cost = 672273, txc onodes = 1, DB updates = 7, DB bytes = 6277, cost max = 95489052 on 2026-01-23T18:35:28.353611+0000, txc max = 104 on 2026-01-23T19:04:00.405895+0000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 7.572911263s
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 7.572911263s
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.627645016s, txc = 0x55944b2c1b00, txc bytes = 25069, txc ios = 1, txc cost = 695069, txc onodes = 1, DB updates = 7, DB bytes = 30114, cost max = 95489052 on 2026-01-23T18:35:28.353611+0000, txc max = 104 on 2026-01-23T19:04:00.405895+0000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.627498150s, txc = 0x55944c30c300, txc bytes = 40207, txc ios = 1, txc cost = 710207, txc onodes = 1, DB updates = 7, DB bytes = 42454, cost max = 95489052 on 2026-01-23T18:35:28.353611+0000, txc max = 104 on 2026-01-23T19:04:00.405895+0000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 3235840 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:34.309251+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551287 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 3235840 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:35.309445+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 3235840 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:36.310289+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 3235840 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:37.310495+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6cdb000/0x0/0x4ffc00000, data 0x2d74c99/0x2e71000, compress 0x0/0x0/0x0, omap 0x3b937, meta 0x60746c9), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 3235840 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:38.310705+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.898449898s of 12.557981491s, submitted: 4
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:39.310876+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:40.311020+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:41.311249+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:42.311396+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _renew_subs
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:43.311633+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:44.311788+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:45.311915+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 3227648 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:46.312187+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:47.312378+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:48.312533+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:49.312689+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:50.312868+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:51.313095+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:52.313225+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:53.313365+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:54.313486+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:55.313719+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:56.313864+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:57.313995+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:58.314148+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:59.314272+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:00.314412+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 3219456 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:01.314603+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:02.314748+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:03.314944+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:04.315077+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:05.315309+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:06.315537+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:07.315779+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:08.315986+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:09.316143+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:10.316331+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:11.316531+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:12.316669+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:13.316802+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:14.316978+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:15.317111+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:16.317247+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:17.317376+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 3211264 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:18.317631+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 3203072 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:19.317752+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 3203072 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:20.317892+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:21.318071+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:22.318218+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:23.318350+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:24.318471+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:25.318642+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:26.318749+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:27.318926+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:28.319195+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd6000/0x0/0x4ffc00000, data 0x2d76718/0x2e74000, compress 0x0/0x0/0x0, omap 0x3bc0b, meta 0x60743f5), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:29.319389+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553725 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 3194880 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 51.413509369s of 51.424388885s, submitted: 13
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:30.319583+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 ms_handle_reset con 0x55944d947c00 session 0x55944d5c0380
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136839168 unmapped: 2629632 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:31.319731+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136839168 unmapped: 2629632 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:32.319862+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Got map version 17
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:33.320025+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:34.320142+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:35.320265+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:36.320379+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:37.320513+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:38.320620+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:39.320811+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 2605056 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:40.320941+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 136937472 unmapped: 2531328 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:41.321088+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'config diff' '{prefix=config diff}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'config show' '{prefix=config show}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137248768 unmapped: 3268608 heap: 140517376 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:42.321209+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 3186688 heap: 140517376 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:43.321358+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'log dump' '{prefix=log dump}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 148307968 unmapped: 3252224 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:44.321475+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'perf dump' '{prefix=perf dump}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'perf schema' '{prefix=perf schema}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:45.321610+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:46.321741+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:47.321896+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:48.322034+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:49.322156+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:50.322282+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:51.322413+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:52.322527+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:53.322627+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:54.322784+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:55.322926+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:56.323041+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:57.323188+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:58.323324+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:59.323448+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:00.323607+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:01.323759+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:02.323894+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:03.324022+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:04.324163+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:05.324294+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:06.324440+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:07.324609+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:08.325190+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:09.325479+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:10.325606+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:11.325751+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:12.326004+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:13.326380+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:14.326581+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:15.326817+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:16.327086+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:17.327336+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:18.327604+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:19.327896+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:20.328234+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:21.328605+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:22.328916+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:23.329064+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:24.329218+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:25.329357+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:26.329693+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:27.329865+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:28.330096+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:29.330228+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:30.330388+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:31.330666+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:32.330839+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:33.331059+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:34.331198+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:35.331439+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:36.331653+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:37.331858+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:38.332074+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:39.332224+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:40.332337+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:41.332534+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:42.332653+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:43.332781+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:44.332947+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:45.333052+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:46.333196+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:47.333296+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:48.333430+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:49.333653+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:50.333974+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:51.334275+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:52.334499+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:53.334742+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:54.334918+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:55.335118+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:56.335273+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:57.335400+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:58.335552+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:59.335771+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:00.335991+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:01.336257+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 14106624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:02.336531+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 14106624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:03.336837+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 14106624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:04.337154+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:05.337365+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:06.337609+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:07.337832+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:08.338002+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:09.338197+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:10.338452+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:11.338599+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:12.338742+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:13.338874+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:14.339029+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:15.339174+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:16.339368+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:17.339523+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:18.339667+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:19.339838+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:20.340038+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:21.340203+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:22.340386+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:23.340589+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:24.340781+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:25.340972+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:26.341117+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:27.341282+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:28.341428+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:29.341611+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:30.341802+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:31.342006+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:32.342203+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:33.342347+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:34.342481+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:35.342623+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:36.342777+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:37.342942+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:38.343133+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:39.343319+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:40.343472+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:41.343636+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:42.343811+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:43.343995+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:44.344117+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:45.344230+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:46.344349+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:47.344503+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:48.344721+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:49.344857+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:50.345025+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:51.345220+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:52.345371+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 14049280 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:53.345619+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 14049280 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:54.345761+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 14049280 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:55.345889+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 14049280 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:56.346022+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 14049280 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:57.346146+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:58.346259+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:59.346398+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:00.346517+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:01.346719+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:02.346831+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:03.347008+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:04.347177+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:05.347257+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:06.347391+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:07.347533+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:08.347671+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:09.347800+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:10.347922+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:11.348071+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:12.348203+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 14188544 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:13.348364+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 14180352 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:14.348496+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 14180352 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:15.348659+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 14172160 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:16.348792+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 14172160 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:17.348984+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 14172160 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:18.349095+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 14172160 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:19.349239+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 14172160 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:20.349394+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 14172160 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:21.349554+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 14172160 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:22.349712+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 14172160 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:23.349874+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 14172160 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:24.349996+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:25.350171+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:26.350381+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:27.350617+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:28.350791+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:29.350919+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:30.351134+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:31.351422+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:32.351596+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:33.351785+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:34.351931+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:35.352101+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:36.352198+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:37.352332+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137396224 unmapped: 14163968 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:38.352468+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:39.352608+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:40.352773+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:41.352951+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:42.353099+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:43.353233+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:44.353339+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:45.353488+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:46.353618+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:47.353742+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:48.353874+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:49.354023+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:50.354271+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:51.354467+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:52.354610+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:53.354769+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:54.354893+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 14155776 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:55.355068+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:56.355201+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:57.355327+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:58.355448+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:59.355661+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:00.355802+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:01.355974+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:02.356108+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:03.356234+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:04.356484+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:05.356648+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:06.356783+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:07.356921+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:08.357045+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:09.357174+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:10.357298+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:11.357479+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:12.357618+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:13.357802+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:14.357951+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:15.358161+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:16.358305+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 14147584 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:17.358470+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:18.358649+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:19.358844+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 14139392 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:20.359063+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:21.359775+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:22.359996+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:23.360117+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:24.360305+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:25.360508+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:26.360832+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:27.361010+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:28.361397+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:29.361755+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:30.361948+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:31.362300+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:32.362433+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:33.362734+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:34.362933+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:35.363082+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:36.363281+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:37.363435+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:38.363797+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 14131200 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:39.363968+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:40.364049+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:41.364276+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:42.364444+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:43.364650+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:44.364788+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:45.364919+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:46.365060+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:47.365204+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:48.365336+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:49.365529+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:50.365693+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:51.365918+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:52.366033+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:53.366196+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:54.366374+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:55.366498+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 14123008 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 14K writes, 55K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4560 syncs, 3.24 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3386 writes, 10K keys, 3386 commit groups, 1.0 writes per commit group, ingest: 14.83 MB, 0.02 MB/s
                                           Interval WAL: 3386 writes, 1322 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:56.366625+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:57.366753+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:58.366878+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:59.367060+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:00.367177+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:01.367614+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:02.367735+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:03.367839+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:04.367885+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:05.368014+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:06.368133+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:07.368313+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 14114816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:08.368540+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 14106624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:09.368771+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:10.368899+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:11.369048+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:12.369158+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:13.369276+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:14.369396+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 14098432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:15.369516+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137469952 unmapped: 14090240 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:16.369624+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137469952 unmapped: 14090240 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:17.369786+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137469952 unmapped: 14090240 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:18.369929+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137469952 unmapped: 14090240 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:19.370033+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:20.370205+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:21.370377+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:22.370491+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:23.370646+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:24.370784+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:25.370859+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:26.370942+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:27.371097+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 14082048 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:28.371287+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:29.371424+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:30.371588+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:31.371778+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:32.372058+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:33.372278+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:34.372469+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:35.372612+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:36.372745+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:37.372950+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:38.373065+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:39.373167+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:40.373343+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:41.373497+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:42.373649+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:43.373755+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:44.373932+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:45.374111+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:46.374270+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:47.374424+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:48.374530+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137486336 unmapped: 14073856 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:49.374714+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:50.374906+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:51.375087+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 14065664 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:52.375325+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:53.375609+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:54.375810+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:55.375998+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:56.376122+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:57.376272+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:58.376458+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:59.376645+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 14057472 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:00.376835+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553341 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 14049280 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:01.377008+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 14041088 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:02.377210+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 14041088 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:03.377372+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 14041088 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:04.377542+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 334.488464355s of 334.516693115s, submitted: 205
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 14032896 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:05.377719+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137576448 unmapped: 13983744 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:06.377870+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:07.378025+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:08.378163+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:09.378288+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:10.378473+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:11.378681+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:12.378855+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:13.379023+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:14.379235+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:15.379394+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:16.379602+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:17.379802+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:18.380169+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:19.380355+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:20.380474+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:21.380667+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:22.380812+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:23.380984+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 13959168 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:24.381107+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:25.381223+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:26.381387+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:27.381619+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:28.381757+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:29.381941+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:30.382115+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:31.382301+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:32.382490+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:33.382639+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:34.382769+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:35.382941+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:36.383143+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:37.383301+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:38.383489+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:39.383629+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:40.383753+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 13942784 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:41.383967+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 13934592 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:42.384157+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 13934592 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:43.384358+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 13934592 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:44.384487+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 13934592 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:45.384670+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 13934592 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:46.384832+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 13934592 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:47.384979+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 13934592 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:48.385113+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 13934592 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:49.385263+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 13934592 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:50.385508+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:51.385732+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:52.386975+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:53.387251+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:54.387489+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:55.387703+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:56.387828+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:57.388046+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:58.388172+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:59.388325+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:00.388464+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:01.388752+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:02.389506+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:03.390160+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:04.390295+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:05.390879+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:06.391034+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:07.391336+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:08.391493+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:09.391762+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:10.391934+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:11.392095+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:12.392544+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 13926400 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:13.392711+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:14.392867+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:15.393015+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:16.393217+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:17.393340+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:18.393517+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:19.393714+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:20.393939+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:21.394210+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:22.394356+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:23.394521+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 13918208 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:24.394705+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137650176 unmapped: 13910016 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:25.394828+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:26.395044+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:27.395196+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:28.395379+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:29.395551+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:30.395735+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:31.395920+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:32.396065+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:33.396459+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:34.396618+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:35.396750+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:36.396862+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:37.396997+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:38.397125+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:39.397265+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 13901824 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:40.397390+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:41.397594+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:42.397706+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:43.397806+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:44.397958+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:45.398093+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:46.398245+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:47.398392+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:48.398519+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:49.398686+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:50.398845+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:51.399060+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:52.399209+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:53.399373+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:54.399628+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 13893632 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:55.399774+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 13885440 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:56.399928+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137682944 unmapped: 13877248 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:57.400077+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137682944 unmapped: 13877248 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:58.400201+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:59.400343+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:00.400524+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:01.400723+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:02.400850+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:03.401038+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:04.401172+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:05.401305+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:06.401465+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:07.401671+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:08.401876+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:09.402037+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:10.402237+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 13869056 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:11.402438+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:12.402624+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:13.402811+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:14.402944+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:15.403161+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:16.403385+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:17.403583+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:18.403714+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:19.403846+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:20.404185+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:21.404406+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:22.404605+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:23.404790+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:24.404953+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 13860864 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:25.405095+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 13852672 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:26.405228+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 13852672 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:27.405373+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 13852672 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:28.405500+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 13852672 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:29.405629+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 13852672 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:30.405775+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 13844480 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:31.406050+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:32.406247+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:33.406391+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:34.406524+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:35.406662+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:36.406853+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:37.407033+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:38.407175+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:39.407362+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:40.407628+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:41.407850+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:42.408092+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:43.408248+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:44.408454+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:45.408599+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:46.408755+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:47.408932+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:48.409063+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:49.409200+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:50.409355+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:51.409600+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:52.409854+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:53.410149+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:54.410335+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:55.410492+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:56.410650+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:57.410807+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:58.410945+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:59.411076+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:00.411210+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:01.411414+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:02.411641+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:03.411785+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:04.411967+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:05.412159+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:06.412295+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:07.412425+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:08.412592+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:09.412726+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 13803520 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:10.412913+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 13803520 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:11.413149+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 13803520 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:12.413380+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 13803520 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:13.413633+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 13803520 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:14.413777+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 13803520 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:15.413907+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:16.414092+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:17.414228+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:18.414362+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:19.414643+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 13836288 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:20.414793+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:21.414952+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:22.415075+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:23.415196+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:24.415385+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:25.415541+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:26.415706+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:27.415852+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:28.416032+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:29.416202+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:30.416376+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:31.416610+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:32.416826+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:33.417899+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:34.418123+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:35.418317+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 13828096 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:36.418515+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:37.418659+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:38.418854+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:39.418988+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:40.419174+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:41.419387+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:42.419541+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:43.419731+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:44.419887+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:45.420025+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:46.420184+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:47.420361+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:48.420536+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:49.421007+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:50.421284+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 13819904 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:51.421464+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:52.421670+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:53.421829+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:54.421948+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:55.422072+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets getting new tickets!
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:56.422260+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _finish_auth 0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:56.423069+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:57.422379+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:58.422510+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:59.422628+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:00.422750+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:01.422941+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:02.423105+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 ms_handle_reset con 0x55944adab800 session 0x55944a4fc380
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944d948400
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:03.423262+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 13811712 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc ms_handle_reset ms_handle_reset con 0x55944d8fac00
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: get_auth_request con 0x55944d8f4c00 auth_method 0
Jan 23 19:20:38 compute-0 ceph-osd[87009]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:04.423423+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 13680640 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 ms_handle_reset con 0x55944d17b400 session 0x55944d3fca80
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944b2cb000
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 ms_handle_reset con 0x55944adaa000 session 0x55944b311340
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: handle_auth_request added challenge on 0x55944adaa800
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:05.423552+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 13549568 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'config diff' '{prefix=config diff}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'config show' '{prefix=config show}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:06.423689+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137945088 unmapped: 13615104 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:38 compute-0 ceph-osd[87009]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:38 compute-0 ceph-osd[87009]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553005 data_alloc: 218103808 data_used: 13418
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: tick
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_tickets
Jan 23 19:20:38 compute-0 ceph-osd[87009]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:07.423829+0000)
Jan 23 19:20:38 compute-0 ceph-osd[87009]: prioritycache tune_memory target: 4294967296 mapped: 137838592 unmapped: 13721600 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:38 compute-0 ceph-osd[87009]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2d7692b/0x2e74000, compress 0x0/0x0/0x0, omap 0x3be03, meta 0x60741fd), peers [0,2] op hist [])
Jan 23 19:20:38 compute-0 ceph-osd[87009]: do_command 'log dump' '{prefix=log dump}'
Jan 23 19:20:38 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14906 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:38 compute-0 ceph-mgr[75390]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 19:20:38 compute-0 ceph-4da2adac-d096-5ead-9ec5-3108259ba9e4-mgr-compute-0-rszfwe[75386]: 2026-01-23T19:20:38.915+0000 7f06d8601640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 19:20:39 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:39 compute-0 rsyslogd[1007]: imjournal from <np0005593912:ceph-osd>: begin to drop messages due to rate-limiting
Jan 23 19:20:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 23 19:20:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/262808791' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 23 19:20:39 compute-0 ceph-mon[75097]: from='client.14900 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:39 compute-0 ceph-mon[75097]: from='client.14904 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:39 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2371314364' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 23 19:20:39 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 23 19:20:39 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/611499245' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 23 19:20:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 23 19:20:40 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971245673' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 23 19:20:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 23 19:20:40 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4238826128' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 23 19:20:40 compute-0 ceph-mon[75097]: from='client.14906 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:40 compute-0 ceph-mon[75097]: pgmap v1540: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:40 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/262808791' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 23 19:20:40 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/611499245' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 23 19:20:40 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1971245673' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 23 19:20:40 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/4238826128' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 23 19:20:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 23 19:20:40 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2106009017' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 19:20:40 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 23 19:20:40 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048714291' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659032622064907 of space, bias 1.0, pg target 0.1997709786619472 quantized to 32 (current 32)
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005188356691283345 of space, bias 4.0, pg target 0.6226028029540014 quantized to 16 (current 16)
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 19:20:40 compute-0 ceph-mgr[75390]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 19:20:41 compute-0 podman[269333]: 2026-01-23 19:20:41.000640278 +0000 UTC m=+0.086686256 container health_status 064b572739d57e03976cf71029b9a6083eeff1cd8bd3e1e1bb498e112c0a2eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '98d91d10850fb9f4ca8d1947afbbabeb0e41643da34e43578083b083116210cb-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-eadbcbebeeb6b2acc4c2607b8c54b2c87b6de5c3335cdd70f0b86f4ac1288951-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 19:20:41 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 23 19:20:41 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/70974730' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 23 19:20:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 23 19:20:41 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2125854921' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 23 19:20:41 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2106009017' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 23 19:20:41 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3048714291' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 23 19:20:41 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/70974730' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 23 19:20:41 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2125854921' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 23 19:20:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 23 19:20:41 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2152121634' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 23 19:20:41 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 23 19:20:41 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259786701' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 23 19:20:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 23 19:20:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3317577327' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 23 19:20:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 23 19:20:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/434825273' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 23 19:20:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 23 19:20:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/638836234' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 23 19:20:42 compute-0 ceph-mon[75097]: pgmap v1541: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:42 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2152121634' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 23 19:20:42 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/259786701' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 23 19:20:42 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3317577327' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 23 19:20:42 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/434825273' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 23 19:20:42 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/638836234' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 23 19:20:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 23 19:20:42 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116450636' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 23 19:20:42 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:20:43 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:14.290591+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1024000 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:15.290793+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:16.290908+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:17.291051+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:18.291273+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:19.291419+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:20.291630+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:21.291784+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:22.292026+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:23.292180+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:24.292330+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:25.292541+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:26.292705+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:27.292854+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:28.292989+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:29.294436+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:30.295295+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:31.295758+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:32.296713+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:33.297408+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:34.297764+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:35.297976+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:36.298126+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:37.298259+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:38.298398+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:39.298547+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 958464 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:40.298709+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:41.298959+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:42.299083+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:43.299236+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:44.299774+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:45.299946+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:46.300096+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:47.300278+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:48.300417+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:49.300547+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:50.300703+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:51.300874+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:52.301027+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:53.301157+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 942080 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:54.301300+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 925696 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:55.301458+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 925696 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:56.301600+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 925696 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:57.301729+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 925696 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:58.301866+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 925696 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:47:59.302011+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 909312 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:00.302152+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 909312 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:01.302267+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:02.302404+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:03.302615+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:04.303018+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:05.303314+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:06.303479+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:07.303634+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:08.303754+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:09.303889+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:10.304052+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:11.304219+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:12.304428+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:13.304600+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:14.304784+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 901120 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:15.305056+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 884736 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:16.305217+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 884736 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:17.305438+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 884736 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:18.305603+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 884736 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:19.305754+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 868352 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:20.305872+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:21.306016+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:22.306170+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:23.306302+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:24.306456+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:25.306689+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:26.306866+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:27.307064+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:28.307266+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:29.307481+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 860160 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:30.307638+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:31.307812+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:32.307964+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:33.308127+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:34.308307+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:35.308945+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:36.309095+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 851968 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:37.309236+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 843776 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:38.309420+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 843776 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:39.309598+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:40.309766+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:41.309919+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:42.310046+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:43.310197+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:44.310360+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:45.310512+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:46.310674+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:47.310806+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:48.310962+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:49.311092+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:50.311245+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:51.311377+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:52.311578+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:53.311703+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 827392 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:54.311811+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 819200 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:55.311990+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 819200 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:56.312112+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 819200 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:57.312248+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 819200 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:58.312437+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 819200 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:48:59.312639+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:00.312798+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:01.312911+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:02.313054+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:03.313245+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:04.313395+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:05.313624+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:06.313756+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:07.313890+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:08.314027+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:09.314183+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:10.314312+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:11.314537+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:12.314789+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:13.314916+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 811008 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:14.315045+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 794624 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:15.315224+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 794624 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:16.315357+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 794624 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:17.315490+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 794624 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:18.315644+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 794624 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:19.315798+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 778240 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:20.315950+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 778240 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:21.316071+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 778240 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:22.316221+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 778240 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:23.316427+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 778240 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:24.316631+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 770048 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:25.316823+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 770048 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:26.316953+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 770048 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:27.317068+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 770048 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:28.317185+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:29.317317+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:30.317465+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:31.317653+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:32.317823+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:33.317970+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:34.318160+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:35.318371+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:36.318525+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:37.318660+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:38.318773+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 761856 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:39.319628+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 745472 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:40.319794+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 745472 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:41.320006+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 745472 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:42.320652+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 745472 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:43.320767+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 745472 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:44.320959+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:45.321292+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:46.321395+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:47.321603+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:48.321740+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:49.321907+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:50.322134+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:51.322285+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:52.322525+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:53.322664+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:54.322875+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:55.323066+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:56.323198+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:57.323398+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:58.323543+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:49:59.323774+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 737280 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc ms_handle_reset ms_handle_reset con 0x558773490000
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: get_auth_request con 0x558773307c00 auth_method 0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:00.323975+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:01.324128+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:02.324355+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:03.324707+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:04.324948+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 ms_handle_reset con 0x558772a55400 session 0x55877180d180
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x558774ceb400
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:05.325117+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:06.325263+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:07.325434+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:08.325574+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:09.325690+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:10.325826+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:11.325965+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:12.326097+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:13.326385+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:14.326583+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:15.326708+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:16.326882+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:17.327017+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:18.327183+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:19.327324+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:20.327774+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:21.327905+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:22.328061+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:23.328208+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:24.328361+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 270336 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:25.328577+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:26.328737+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:27.328922+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:28.329077+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:29.329228+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:30.329341+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:31.329486+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:32.329626+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:33.329754+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:34.329892+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 237568 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:35.330054+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:36.330176+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:37.330302+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:38.330467+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:39.330609+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:40.330770+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:41.330899+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:42.331039+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:43.331204+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:44.331920+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 229376 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:45.332064+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 212992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:46.332694+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 212992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:47.333112+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 212992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:48.333523+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 212992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:49.333729+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 212992 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:50.334000+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:51.334307+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:52.334489+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:53.334653+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:54.334900+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:55.335129+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:56.335198+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:57.335342+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:58.335483+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:50:59.335647+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977608 data_alloc: 218103808 data_used: 3973
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:00.335825+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:01.335977+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:02.336175+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 204800 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:03.336383+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 300.046569824s of 300.317047119s, submitted: 106
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x558774ceb800
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:04.336541+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:05.336753+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:06.336886+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:07.337017+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:08.337236+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:09.337470+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:10.337647+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:11.337825+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:12.338090+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:13.338258+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:14.338402+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:15.338583+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:16.338712+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:17.338829+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:18.338991+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:19.339202+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:20.339548+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:21.339708+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:22.339874+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:23.340021+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:24.340188+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:25.340361+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:26.342622+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:27.342740+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:28.342897+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:29.343049+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:30.343199+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:31.343305+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:32.343428+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:33.343662+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:34.343792+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:35.344022+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:36.344186+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:37.344391+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:38.344522+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:39.344733+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:40.344868+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:41.344995+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:42.345123+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:43.345311+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:44.345496+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:45.345668+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:46.345814+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:47.345931+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:48.346076+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:49.346292+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:50.346456+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:51.346662+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:52.346860+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:53.347015+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:54.347313+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:55.347606+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:56.347733+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:57.347962+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:58.348154+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:51:59.348350+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:00.348501+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:01.348626+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:02.348835+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:03.349031+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:04.349205+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:05.349451+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:06.349614+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:07.349746+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:08.349915+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:09.350037+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:10.350176+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:11.350311+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:12.350471+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:13.350663+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:14.350836+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:15.351076+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:16.351196+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:17.351337+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:18.351481+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:19.351622+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:20.351798+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:21.351953+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:22.352434+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:23.352692+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:24.353115+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1269760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:25.353351+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:26.353642+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:27.353966+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:28.354226+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:29.354548+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:30.354885+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:31.355251+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:32.355440+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:33.355679+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:34.355882+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1261568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:35.356122+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:36.356303+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:37.356445+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:38.356760+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:39.356967+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:40.357152+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:41.357381+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:42.357570+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:43.357722+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:44.357905+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:45.358072+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:46.358248+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:47.358427+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:48.358651+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:49.358775+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:50.358882+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:51.359022+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:52.359135+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:53.359252+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:54.359377+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:55.359567+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:56.359674+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:57.359801+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:58.359908+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:52:59.360067+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:00.360192+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:01.360498+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:02.360741+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:03.361016+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:04.361180+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:05.361387+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:06.361583+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:07.361680+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:08.361847+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:09.361999+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:10.362114+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:11.362216+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:12.362372+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:13.362528+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:14.362729+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:15.362925+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:16.363049+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:17.363199+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:18.363349+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:19.363533+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:20.363699+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:21.363866+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:22.363998+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:23.364125+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:24.364300+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:25.364488+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:26.364630+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:27.364758+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:28.364920+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:29.365112+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:30.365261+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:31.365433+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:32.365606+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:33.365744+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1253376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:34.365876+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:35.366037+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:36.366206+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:37.366368+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:38.366523+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:39.366656+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:40.366813+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:41.366967+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:42.367154+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:43.367300+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:44.367450+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:45.367693+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:46.367842+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:47.367966+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:48.368091+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:49.368257+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:50.368421+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:51.368658+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:52.368843+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:53.369053+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1236992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:54.369189+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:55.369407+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:56.369589+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:57.369710+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:58.369859+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:53:59.370059+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:00.370192+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:01.370328+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:02.370472+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:03.370633+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:04.370873+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:05.371107+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:06.371260+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:07.371406+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:08.371633+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:09.371811+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:10.371968+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:11.372126+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:12.372321+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:13.372471+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread fragmentation_score=0.000117 took=0.000012s
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:14.372740+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:15.373014+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:16.373217+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:17.373347+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:18.373520+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:19.373811+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:20.374086+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:21.374223+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:22.374382+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:23.374540+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:24.384482+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:25.384725+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:26.385052+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:27.385286+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:28.385497+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:29.386934+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:30.388045+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:31.388931+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:32.389609+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:33.390057+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:34.390245+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:35.390607+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:36.390850+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:37.391099+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:38.391281+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:39.391645+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:40.391981+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:41.392203+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:42.392442+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:43.392635+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:44.392783+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:45.393090+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:46.393518+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:47.393914+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:48.394073+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:49.394229+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:50.394469+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1212416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5791 writes, 24K keys, 5791 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5791 writes, 971 syncs, 5.96 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5587717e7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:51.394623+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:52.394806+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:53.394948+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:54.395077+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:55.395266+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:56.395431+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:57.395617+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:58.395775+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:54:59.395900+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:00.396065+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1171456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:01.396198+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1163264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:02.396411+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1163264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:03.396597+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1163264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:04.396752+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1163264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:05.396940+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1163264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:06.397181+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:07.397412+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:08.397533+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:09.397740+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:10.397898+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:11.398043+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:12.398229+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:13.398405+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:14.398600+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:15.398784+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:16.398930+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:17.399104+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:18.399290+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:19.399502+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:20.399734+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:21.399884+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:22.400088+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:23.400352+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:24.400708+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:25.400997+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:26.401267+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:27.401459+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:28.402437+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:29.402663+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:30.402910+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:31.403061+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:32.403259+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:33.403480+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:34.403709+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:35.403905+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:36.404173+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:37.404392+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:38.404553+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:39.404760+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:40.404935+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:41.405059+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:42.405215+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:43.405444+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:44.405657+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:45.405913+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:46.406130+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:47.406308+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:48.406531+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:49.406787+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:50.406914+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:51.407068+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:52.407195+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:53.407355+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:54.407510+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:55.407730+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:56.407946+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:57.408142+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:58.408354+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:55:59.408524+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:00.408740+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:01.408920+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:02.409148+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:03.409292+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 300.099670410s of 300.125030518s, submitted: 18
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1089536 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:04.409422+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:05.409617+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 1974272 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:06.409753+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:07.409901+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:08.410061+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:09.410240+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:10.410454+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:11.410698+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:12.410899+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:13.411075+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:14.411243+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:15.411513+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:16.411673+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:17.411839+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:18.411985+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:19.412209+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:20.412668+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:21.412815+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:22.412976+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:23.413162+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 1966080 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:24.413747+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:25.413922+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:26.414052+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:27.414197+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:28.414324+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:29.414604+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:30.414877+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:31.415054+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:32.415237+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:33.415436+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:34.415586+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:35.415801+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:36.415931+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:37.416130+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:38.416275+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:39.416454+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:40.416695+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:41.416861+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:42.416981+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:43.417108+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:44.417258+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:45.417452+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:46.417612+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:47.417795+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:48.417897+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:49.418027+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:50.418163+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:51.418319+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:52.418437+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:53.418608+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:54.418761+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:55.418902+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:56.419086+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:57.419226+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:58.419416+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:56:59.419574+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:00.419740+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:01.419925+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:02.420121+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:03.420303+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:04.420440+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:05.420637+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:06.420804+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:07.420942+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:08.421120+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:09.421278+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:10.421377+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:11.421528+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:12.421635+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:13.421766+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:14.421867+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:15.422044+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:16.422188+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:17.422317+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:18.422443+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:19.422627+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:20.422802+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:21.422936+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268649211' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:22.423047+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:23.423160+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:24.423311+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:25.423459+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:26.423651+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:27.423829+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:28.423992+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:29.424150+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:30.424255+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:31.424497+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:32.424646+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:33.424853+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 1957888 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:34.424969+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:35.425132+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:36.425309+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:37.425440+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:38.425566+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:39.425766+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:40.425892+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:41.426008+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:42.426221+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:43.426367+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:44.426519+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:45.426925+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:46.427097+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:47.427231+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:48.427401+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:49.427607+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:50.427770+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:51.427939+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:52.432549+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:53.432756+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:54.432879+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:55.433177+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:56.433342+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:57.433463+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:58.433641+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:57:59.433748+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:00.433931+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:01.434031+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:02.434212+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:03.434383+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:04.434510+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:05.434769+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:06.434959+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:07.435094+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:08.435231+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:09.435407+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:10.435531+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:11.435695+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:12.435848+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:13.435993+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:14.436437+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:15.436762+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:16.437078+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:17.437296+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:18.437519+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:19.437691+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:20.437969+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:21.438113+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:22.438357+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:23.438539+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:24.438804+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:25.439216+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:26.439431+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:27.439675+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:28.439848+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:29.440080+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:30.440269+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:31.440477+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:32.440723+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:33.440941+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:34.441116+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:35.441290+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:36.441494+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:37.441682+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:38.441883+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:39.442063+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:40.442263+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:41.442435+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:42.442683+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:43.442942+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:44.443154+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:45.443384+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:46.443651+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:47.443893+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:48.444120+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:49.444306+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:50.444500+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:51.444717+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:52.444934+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:53.445284+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:54.445491+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:55.445766+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:56.445953+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977992 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:57.446125+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb2e32/0x16c000, compress 0x0/0x0/0x0, omap 0x15150, meta 0x2bbaeb0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:58.446299+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 1949696 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:58:59.446636+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 118 handle_osd_map epochs [118,119], i have 119, src has [1,119]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 175.431274414s of 176.003372192s, submitted: 106
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 1941504 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:00.446914+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 1941504 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb6000/0x0/0x4ffc00000, data 0xb65be/0x172000, compress 0x0/0x0/0x0, omap 0x155f8, meta 0x2bbaa08), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:01.447122+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 1933312 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984724 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:02.447343+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 1933312 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x55877572c400
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 121 ms_handle_reset con 0x55877572c400 session 0x55877594a8c0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fceb6000/0x0/0x4ffc00000, data 0xb65be/0x172000, compress 0x0/0x0/0x0, omap 0x155f8, meta 0x2bbaa08), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:03.447639+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 1753088 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x5587760a4000
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:04.447839+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 10878976 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 121 ms_handle_reset con 0x5587760a4000 session 0x5587731b3180
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:05.448115+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:06.448343+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012426 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:07.448544+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x529d51/0x5e9000, compress 0x0/0x0/0x0, omap 0x15b42, meta 0x2bba4be), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:08.448798+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:09.449046+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:10.449271+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.254834175s of 11.334155083s, submitted: 17
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:11.449512+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017830 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:12.449681+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:13.449848+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:14.450050+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:15.450287+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:16.450491+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017830 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:17.450710+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:18.450889+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:19.451056+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:20.451275+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:21.451499+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017830 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:22.451710+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:23.451876+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:24.452087+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:25.452472+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 10887168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 10
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:26.452674+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017830 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:27.452893+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:28.453057+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:29.453276+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:30.453424+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:31.453621+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017830 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:32.453810+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:33.453979+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 10829824 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 11
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:34.454175+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 10903552 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x52b7d0/0x5ec000, compress 0x0/0x0/0x0, omap 0x15e3e, meta 0x2bba1c2), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:35.454520+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.813196182s of 24.820951462s, submitted: 9
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 10903552 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:36.454718+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 10862592 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020764 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:37.454870+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 10854400 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:38.455035+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 10854400 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:39.455202+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 10854400 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:40.455344+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fca3b000/0x0/0x4ffc00000, data 0x52d3d5/0x5ef000, compress 0x0/0x0/0x0, omap 0x160e8, meta 0x2bb9f18), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:41.455473+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca38000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023378 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:42.455618+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:43.455770+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:44.456006+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca38000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:45.456200+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:46.456358+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023522 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:47.456485+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:48.456649+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:49.456840+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.629572868s of 14.033327103s, submitted: 35
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:50.456967+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 10813440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca3a000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:51.457102+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca3a000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022818 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:52.457260+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:53.457418+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:54.457726+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca3a000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:55.457972+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:56.458153+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022818 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:57.458354+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:58.458540+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T18:59:59.458772+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca3a000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:00.459004+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:01.459222+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022818 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:02.459393+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:03.459595+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.991110802s of 13.999632835s, submitted: 4
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fca3a000/0x0/0x4ffc00000, data 0x52ee54/0x5f2000, compress 0x0/0x0/0x0, omap 0x16345, meta 0x2bb9cbb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 10780672 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:04.459824+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:05.460067+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:06.460273+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026312 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:07.460437+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 10764288 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:08.460764+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 10764288 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fca35000/0x0/0x4ffc00000, data 0x530a59/0x5f5000, compress 0x0/0x0/0x0, omap 0x165f2, meta 0x2bb9a0e), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:09.460981+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 10764288 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:10.461123+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 10764288 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 126 handle_osd_map epochs [126,127], i have 127, src has [1,127]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x530a59/0x5f5000, compress 0x0/0x0/0x0, omap 0x165f2, meta 0x2bb9a0e), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:11.461325+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028926 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:12.461626+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:13.461824+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:14.461972+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:15.462204+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:16.462357+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028926 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:17.462618+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:18.462831+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 10739712 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:19.463189+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:20.463391+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:21.463642+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028926 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:22.463895+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:23.464037+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:24.465435+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:25.466355+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:26.467290+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028926 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:27.467683+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca32000/0x0/0x4ffc00000, data 0x5324d8/0x5f8000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:28.467997+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 10731520 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:29.468168+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x5587760a4400
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.003675461s of 25.996925354s, submitted: 34
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 10567680 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:30.468406+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 10567680 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:31.468540+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 10559488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030058 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:32.468729+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 10559488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:33.469071+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fca33000/0x0/0x4ffc00000, data 0x532573/0x5f9000, compress 0x0/0x0/0x0, omap 0x16920, meta 0x2bb96e0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 10559488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:34.469234+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 10559488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:35.469415+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 10559488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:36.469573+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 10551296 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031844 data_alloc: 218103808 data_used: 4433
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:37.469706+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fca2f000/0x0/0x4ffc00000, data 0x53413d/0x5fb000, compress 0x0/0x0/0x0, omap 0x16bd1, meta 0x2bb942f), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fca2f000/0x0/0x4ffc00000, data 0x53413d/0x5fb000, compress 0x0/0x0/0x0, omap 0x16bd1, meta 0x2bb942f), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 9486336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:38.469884+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 9469952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:39.470050+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fca29000/0x0/0x4ffc00000, data 0x535dfd/0x5ff000, compress 0x0/0x0/0x0, omap 0x16ddf, meta 0x2bb9221), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.783244133s of 10.030097008s, submitted: 63
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 9469952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:40.470192+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 9469952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:41.470332+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 9469952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035734 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:42.470491+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 9469952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:43.470622+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 9461760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:44.470859+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 9527296 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:45.471022+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fca2d000/0x0/0x4ffc00000, data 0x535dfd/0x5ff000, compress 0x0/0x0/0x0, omap 0x16ddf, meta 0x2bb9221), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 9527296 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fca2d000/0x0/0x4ffc00000, data 0x535dfd/0x5ff000, compress 0x0/0x0/0x0, omap 0x16ddf, meta 0x2bb9221), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:46.471161+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 9461760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042452 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:47.471294+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 9412608 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:48.471502+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 9404416 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:49.471660+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 9404416 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fca25000/0x0/0x4ffc00000, data 0x5396ef/0x605000, compress 0x0/0x0/0x0, omap 0x17424, meta 0x2bb8bdc), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.577225685s of 10.690270424s, submitted: 59
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:50.471821+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 9404416 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:51.472166+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 9396224 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050872 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:52.472315+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 9396224 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:53.472488+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53cfc1/0x60b000, compress 0x0/0x0/0x0, omap 0x179f5, meta 0x2bb860b), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 9396224 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:54.472673+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 9396224 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:55.472855+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 9371648 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:56.472970+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 9355264 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047996 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:57.473107+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 9322496 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fca22000/0x0/0x4ffc00000, data 0x53cf26/0x60a000, compress 0x0/0x0/0x0, omap 0x179f5, meta 0x2bb860b), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:58.473308+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 9322496 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:00:59.473499+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 9322496 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:00.473661+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 9314304 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.739472389s of 10.844255447s, submitted: 58
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:01.473802+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051346 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:02.473926+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:03.474059+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:04.474213+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:05.474411+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:06.474653+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051346 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:07.474861+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:08.474997+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:09.475146+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:10.475291+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:11.475551+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051346 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:12.475838+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:13.476033+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:14.476202+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:15.476378+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:16.476590+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051346 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:17.476720+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53e9e5/0x60d000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:18.476854+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:19.476990+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:20.477124+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 9306112 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.109880447s of 20.121318817s, submitted: 11
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:21.477247+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 9273344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054170 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:22.477421+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 9273344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:23.477598+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53eb1b/0x60f000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 9273344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:24.477724+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53eb1b/0x60f000, compress 0x0/0x0/0x0, omap 0x17e2c, meta 0x2bb81d4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:25.477868+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 9232384 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:26.478012+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 134 handle_osd_map epochs [134,135], i have 135, src has [1,135]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058318 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:27.478140+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca18000/0x0/0x4ffc00000, data 0x5406b5/0x612000, compress 0x0/0x0/0x0, omap 0x180e9, meta 0x2bb7f17), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:28.478280+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:29.478396+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:30.478523+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:31.478660+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059768 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:32.478805+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:33.478933+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542069/0x613000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:34.479113+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:35.479314+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:36.479447+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059768 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:37.479604+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:38.479729+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542069/0x613000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:39.479873+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:40.479999+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542069/0x613000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:41.480167+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542069/0x613000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059768 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:42.480291+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.286027908s of 21.392595291s, submitted: 50
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:43.480421+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542069/0x613000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:44.480667+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 9248768 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:45.480821+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 9240576 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:46.480932+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 9240576 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:47.481064+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062576 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 9240576 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:48.481195+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x54219f/0x615000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 9240576 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:49.481324+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 9240576 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:50.481451+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:51.481615+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:52.481802+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064268 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x5421fe/0x616000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.978779793s of 10.000945091s, submitted: 8
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:53.481983+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 9216000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:54.482139+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:55.482315+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:56.482482+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:57.482621+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066630 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:58.482780+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca15000/0x0/0x4ffc00000, data 0x542299/0x617000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:01:59.482924+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 9191424 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:00.483062+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 9175040 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:01.483237+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 9142272 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:02.483448+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064794 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.141513824s of 10.157646179s, submitted: 7
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 9142272 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:03.483581+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x5421fe/0x616000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 9142272 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:04.483758+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 9076736 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:05.484432+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca15000/0x0/0x4ffc00000, data 0x5421cf/0x616000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 9076736 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:06.484590+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 9076736 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:07.484731+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064954 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x5421cf/0x616000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 9060352 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:08.484896+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 9060352 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:09.485096+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 9052160 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:10.485285+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 9043968 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:11.485459+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 9043968 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:12.485605+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064236 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 9043968 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:13.485723+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542134/0x615000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 9043968 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:14.485842+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.637335777s of 11.865023613s, submitted: 11
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 9043968 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:15.485977+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 12
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542134/0x615000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 7938048 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:16.486100+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:17.486314+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 7938048 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064236 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542134/0x615000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:18.486536+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 7929856 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca17000/0x0/0x4ffc00000, data 0x542134/0x615000, compress 0x0/0x0/0x0, omap 0x18438, meta 0x2bb7bc8), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:19.486701+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 7929856 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:20.486854+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 7929856 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:21.487000+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 136 handle_osd_map epochs [136,137], i have 137, src has [1,137]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:22.487161+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 7929856 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067554 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:23.487325+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:24.487495+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x543d09/0x617000, compress 0x0/0x0/0x0, omap 0x1864c, meta 0x2bb79b4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:25.487712+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:26.487925+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:27.488091+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067108 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.785848618s of 13.113835335s, submitted: 29
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x543d09/0x617000, compress 0x0/0x0/0x0, omap 0x1864c, meta 0x2bb79b4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:28.488230+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:29.488429+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:30.488643+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca15000/0x0/0x4ffc00000, data 0x543d09/0x617000, compress 0x0/0x0/0x0, omap 0x1864c, meta 0x2bb79b4), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:31.488773+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca10000/0x0/0x4ffc00000, data 0x545788/0x61a000, compress 0x0/0x0/0x0, omap 0x1896c, meta 0x2bb7694), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:32.488984+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069738 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:33.489195+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca10000/0x0/0x4ffc00000, data 0x545788/0x61a000, compress 0x0/0x0/0x0, omap 0x1896c, meta 0x2bb7694), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:34.489418+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:35.489605+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:36.489787+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:37.489980+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 7913472 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068604 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:38.490210+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 7905280 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:39.490424+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 7905280 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x5456ed/0x619000, compress 0x0/0x0/0x0, omap 0x1896c, meta 0x2bb7694), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x5456ed/0x619000, compress 0x0/0x0/0x0, omap 0x1896c, meta 0x2bb7694), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:40.490655+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 7905280 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.880651474s of 12.901146889s, submitted: 17
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x5456ed/0x619000, compress 0x0/0x0/0x0, omap 0x1896c, meta 0x2bb7694), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 138 handle_osd_map epochs [138,139], i have 139, src has [1,139]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:41.490791+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:42.490997+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072066 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:43.491141+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:44.491316+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:45.491503+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:46.491680+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 7872512 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fca10000/0x0/0x4ffc00000, data 0x547322/0x61c000, compress 0x0/0x0/0x0, omap 0x18c30, meta 0x2bb73d0), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 139 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:47.491825+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074696 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:48.491989+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:49.492186+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:50.492346+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 140 handle_osd_map epochs [140,141], i have 141, src has [1,141]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.944893837s of 10.524827957s, submitted: 44
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:51.492516+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:52.492665+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fca07000/0x0/0x4ffc00000, data 0x54aa8d/0x623000, compress 0x0/0x0/0x0, omap 0x1921f, meta 0x2bb6de1), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079018 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:53.492853+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fca07000/0x0/0x4ffc00000, data 0x54aa8d/0x623000, compress 0x0/0x0/0x0, omap 0x1921f, meta 0x2bb6de1), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fca07000/0x0/0x4ffc00000, data 0x54aa8d/0x623000, compress 0x0/0x0/0x0, omap 0x1921f, meta 0x2bb6de1), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:54.493062+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:55.493310+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:56.493601+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 7823360 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:57.493794+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081792 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:58.493939+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:02:59.494097+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:00.494247+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:01.494399+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:02.494545+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081792 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.529521942s of 11.548330307s, submitted: 22
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:03.494716+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:04.494905+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:05.495128+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:06.495316+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:07.495542+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081936 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:08.495739+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:09.495885+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:10.496450+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:11.496662+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:12.496979+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081232 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:13.497193+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:14.497413+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.203193665s of 12.330730438s, submitted: 2
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:15.497672+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:16.497858+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:17.498043+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082908 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:18.498219+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:19.498420+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5a7/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:20.498600+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:21.498778+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:22.498923+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c642/0x628000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 7946240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084456 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:23.499060+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:24.499243+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:25.500276+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:26.500494+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca03000/0x0/0x4ffc00000, data 0x54c6dd/0x629000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:27.500743+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086004 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:28.500946+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.663117409s of 13.674868584s, submitted: 5
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:29.501211+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca03000/0x0/0x4ffc00000, data 0x54c6dd/0x629000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 7921664 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:30.501388+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 7503872 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:31.501630+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 7503872 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 13
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:32.501756+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085270 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:33.501904+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5a7/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:34.502043+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:35.502261+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:36.502423+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:37.502607+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 7462912 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c5d5/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085814 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:38.502763+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 7462912 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:39.502937+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 7462912 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.330520630s of 10.966326714s, submitted: 146
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:40.503087+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 7454720 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:41.503272+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 7454720 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5d3/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5d3/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:42.503459+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084090 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:43.503545+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:44.503672+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:45.503850+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:46.504012+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 7446528 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:47.504131+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084106 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:48.504258+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:49.504397+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:50.504553+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:51.504741+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.110977173s of 12.122784615s, submitted: 6
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:52.504893+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084106 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:53.505044+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:54.505195+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:55.505435+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:56.505700+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:57.505839+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084106 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:58.505981+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:03:59.506107+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 7438336 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:00.506259+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:01.506385+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:02.506634+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.884099960s of 10.897595406s, submitted: 2
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084122 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:03.506843+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:04.506992+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:05.507235+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:06.507428+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:07.507585+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084122 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:08.507738+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 7430144 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:09.507887+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:10.508015+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:11.508206+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:12.508380+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5a7/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085814 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:13.508518+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.703331947s of 10.725868225s, submitted: 6
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:14.508655+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:15.508818+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:16.508947+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:17.509083+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087330 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:18.509245+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca02000/0x0/0x4ffc00000, data 0x54c70a/0x629000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 7397376 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:19.509387+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 7405568 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:20.509472+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:21.509604+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 7421952 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:22.509764+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088144 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:23.509881+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.101762772s of 10.280632019s, submitted: 10
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:24.510046+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:25.510189+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:26.510343+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:27.510468+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086980 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:28.510619+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:29.510770+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 7413760 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:30.510950+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 7405568 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:31.511107+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 7405568 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:32.511261+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 7405568 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086022 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:33.511419+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 7536640 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:34.511632+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 7536640 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.128376961s of 11.193220139s, submitted: 8
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:35.511815+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c5d4/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 7536640 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:36.512023+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 7536640 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:37.512204+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 7536640 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087714 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:38.512488+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 7495680 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:39.512733+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 7495680 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:40.512903+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 7495680 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:41.513114+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5d2/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 7487488 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:42.513366+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 7479296 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088512 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:43.513633+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 7479296 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:44.513934+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c670/0x628000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:45.514214+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:46.514481+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca04000/0x0/0x4ffc00000, data 0x54c66e/0x628000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.873952866s of 11.819092751s, submitted: 15
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:47.514679+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089470 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:48.514944+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54c5a7/0x627000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:49.515209+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:50.515398+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7099 writes, 27K keys, 7099 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7099 writes, 1492 syncs, 4.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1308 writes, 3048 keys, 1308 commit groups, 1.0 writes per commit group, ingest: 1.71 MB, 0.00 MB/s
                                           Interval WAL: 1308 writes, 521 syncs, 2.51 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:51.515553+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:52.515832+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088880 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:53.516042+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:54.516312+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:55.516689+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:56.516852+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.971531868s of 10.009545326s, submitted: 5
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:57.517050+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088896 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:58.517267+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 7471104 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:04:59.517501+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc ms_handle_reset ms_handle_reset con 0x558773307c00
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: get_auth_request con 0x558772a55400 auth_method 0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 7208960 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:00.517725+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 7225344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:01.517894+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 7225344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:02.518111+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [0,0,1])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 7225344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088896 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:03.518302+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 7225344 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:04.518491+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 ms_handle_reset con 0x558774ceb400 session 0x558773f09500
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x5587760a5000
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 7094272 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:05.518728+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 7233536 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:06.518919+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 7233536 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:07.519099+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 7233536 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088896 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:08.519296+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.829608917s of 11.841785431s, submitted: 2
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fca06000/0x0/0x4ffc00000, data 0x54c50c/0x626000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x2bb6a95), peers [1,2] op hist [0,0,0,1])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 7069696 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:09.519541+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 6750208 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:10.519732+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 5988352 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:11.519911+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 2048000 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:12.520087+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 1671168 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101070 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb7f9000/0x0/0x4ffc00000, data 0x5b88e7/0x693000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x3d56a95), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:13.520249+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87080960 unmapped: 1835008 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:14.520399+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 1802240 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:15.520595+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87359488 unmapped: 1556480 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:16.520782+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 1597440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:17.520960+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb7e0000/0x0/0x4ffc00000, data 0x5d2267/0x6ac000, compress 0x0/0x0/0x0, omap 0x1956b, meta 0x3d56a95), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 1597440 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102366 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:18.521202+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.226122856s of 10.100378036s, submitted: 43
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 1654784 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:19.521398+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 1646592 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:20.521544+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb7a0000/0x0/0x4ffc00000, data 0x60e4ca/0x6ea000, compress 0x0/0x0/0x0, omap 0x19783, meta 0x3d5687d), peers [1,2] op hist [0,0,0,0,1])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87359488 unmapped: 1556480 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:21.521742+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 1622016 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:22.521914+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 1589248 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107106 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:23.522055+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 974848 heap: 88915968 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:24.522194+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb76e000/0x0/0x4ffc00000, data 0x64287e/0x71e000, compress 0x0/0x0/0x0, omap 0x19783, meta 0x3d5687d), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 729088 heap: 89964544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:25.522352+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb6ee000/0x0/0x4ffc00000, data 0x6c0562/0x79d000, compress 0x0/0x0/0x0, omap 0x19783, meta 0x3d5687d), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 434176 heap: 89964544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:26.522515+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 434176 heap: 89964544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb6d4000/0x0/0x4ffc00000, data 0x6da2a6/0x7b7000, compress 0x0/0x0/0x0, omap 0x19783, meta 0x3d5687d), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:27.522667+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 143 handle_osd_map epochs [143,144], i have 144, src has [1,144]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 2777088 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117120 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:28.522800+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.659546852s of 10.024695396s, submitted: 107
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 2883584 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:29.522934+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 2777088 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:30.523118+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 88989696 unmapped: 3072000 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb687000/0x0/0x4ffc00000, data 0x724489/0x803000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:31.523577+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 3055616 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:32.523811+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb662000/0x0/0x4ffc00000, data 0x74bf73/0x82a000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90324992 unmapped: 1736704 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129242 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:33.523992+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1409024 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb63c000/0x0/0x4ffc00000, data 0x77037e/0x84f000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:34.524180+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90669056 unmapped: 1392640 heap: 92061696 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:35.524420+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 2580480 heap: 93110272 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:36.524553+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb604000/0x0/0x4ffc00000, data 0x7a90c4/0x888000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [0,1])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90685440 unmapped: 2424832 heap: 93110272 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:37.524745+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90726400 unmapped: 2383872 heap: 93110272 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134428 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:38.524907+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.119792938s of 10.064953804s, submitted: 67
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 90726400 unmapped: 2383872 heap: 93110272 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:39.525172+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 2973696 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:40.525323+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb578000/0x0/0x4ffc00000, data 0x83745e/0x914000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 91226112 unmapped: 2932736 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:41.525449+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 91234304 unmapped: 2924544 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb575000/0x0/0x4ffc00000, data 0x83a809/0x917000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:42.525608+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 2637824 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134906 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:43.525751+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 2744320 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x5587760a5400
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:44.525878+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 1425408 heap: 94158848 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:45.526026+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 2244608 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:46.526205+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 2244608 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:47.526325+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 14
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb4e6000/0x0/0x4ffc00000, data 0x8c77bc/0x9a6000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94052352 unmapped: 1155072 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150332 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:48.526461+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.701053619s of 10.105629921s, submitted: 89
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93642752 unmapped: 1564672 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:49.526670+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 1490944 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:50.526874+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 1490944 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:51.527089+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93913088 unmapped: 2342912 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:52.527321+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93913088 unmapped: 2342912 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148740 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:53.527504+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45d000/0x0/0x4ffc00000, data 0x94fd63/0xa2f000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 93913088 unmapped: 2342912 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:54.527673+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94076928 unmapped: 2179072 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:55.527855+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94076928 unmapped: 2179072 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:56.528113+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94076928 unmapped: 2179072 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:57.528243+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45a000/0x0/0x4ffc00000, data 0x94fd1f/0xa2f000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94076928 unmapped: 2179072 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146978 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:58.528392+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.438689232s of 10.236640930s, submitted: 38
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94085120 unmapped: 2170880 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:05:59.528519+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94085120 unmapped: 2170880 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:00.528653+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94093312 unmapped: 2162688 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:01.528783+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94101504 unmapped: 2154496 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:02.528913+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94101504 unmapped: 2154496 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148112 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:03.529067+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x94fd20/0xa2f000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94126080 unmapped: 2129920 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:04.529211+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 2097152 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:05.529398+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94240768 unmapped: 2015232 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:06.529606+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 2080768 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:07.529774+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x94fe00/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x94fe00/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 2080768 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149484 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:08.529976+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.595318317s of 10.068367004s, submitted: 91
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:09.530127+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45c000/0x0/0x4ffc00000, data 0x94fe0e/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:10.530287+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1998848 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:11.530492+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 2105344 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:12.530739+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 2088960 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150618 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:13.530911+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 2088960 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:14.531078+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb458000/0x0/0x4ffc00000, data 0x94fe2e/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 2088960 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:15.531253+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 2088960 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:16.531372+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45c000/0x0/0x4ffc00000, data 0x94fe2c/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45c000/0x0/0x4ffc00000, data 0x94fe2c/0xa30000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:17.531508+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150012 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:18.531687+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:19.531868+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45a000/0x0/0x4ffc00000, data 0x94fd1e/0xa2f000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:20.532002+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:21.532153+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.511313438s of 12.622332573s, submitted: 87
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:22.532344+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147938 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:23.532483+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 2072576 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:24.532692+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:25.532905+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb460000/0x0/0x4ffc00000, data 0x94fb8c/0xa2c000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:26.533095+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:27.533261+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147332 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:28.533424+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:29.533626+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45f000/0x0/0x4ffc00000, data 0x94fc2b/0xa2d000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb45f000/0x0/0x4ffc00000, data 0x94fc2b/0xa2d000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:30.533802+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:31.533968+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.959608078s of 10.009652138s, submitted: 11
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2064384 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:32.534073+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb460000/0x0/0x4ffc00000, data 0x94fb90/0xa2c000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 2056192 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148896 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:33.534224+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 2056192 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:34.534346+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb460000/0x0/0x4ffc00000, data 0x94fb90/0xa2c000, compress 0x0/0x0/0x0, omap 0x19ace, meta 0x3d56532), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 2056192 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:35.534509+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 144 handle_osd_map epochs [144,145], i have 145, src has [1,145]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:36.534636+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:37.534784+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:38.534943+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151784 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb45c000/0x0/0x4ffc00000, data 0x9516fa/0xa2e000, compress 0x0/0x0/0x0, omap 0x19d9c, meta 0x3d56264), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:39.535109+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:40.535254+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:41.535415+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2048000 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:42.535642+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.947566986s of 11.059520721s, submitted: 23
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:43.535851+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154414 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb459000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:44.536063+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:45.536276+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb459000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:46.536469+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:47.536636+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:48.536779+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 2039808 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154414 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb459000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:49.536914+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:50.537060+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:51.537243+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb459000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:52.537611+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:53.537818+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154414 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.746775627s of 10.756615639s, submitted: 11
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:54.538010+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:55.538199+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb459000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:56.538335+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:57.538499+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 2031616 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:58.538651+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94240768 unmapped: 2015232 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153854 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:06:59.538799+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94240768 unmapped: 2015232 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:00.539038+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94240768 unmapped: 2015232 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:01.539180+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 2007040 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:02.539343+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 2007040 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb45a000/0x0/0x4ffc00000, data 0x95328b/0xa32000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:03.539529+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156504 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:04.539702+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.206610680s of 11.225436211s, submitted: 8
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:05.539888+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:06.540068+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:07.540227+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:08.540400+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153838 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:09.540610+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:10.540761+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:11.540936+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x953179/0xa31000, compress 0x0/0x0/0x0, omap 0x1a031, meta 0x3d55fcf), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1990656 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:12.541137+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:13.541293+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158880 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:14.541492+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:15.541784+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:16.541943+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb455000/0x0/0x4ffc00000, data 0x954e19/0xa35000, compress 0x0/0x0/0x0, omap 0x1a302, meta 0x3d55cfe), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:17.542150+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1974272 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb455000/0x0/0x4ffc00000, data 0x954e19/0xa35000, compress 0x0/0x0/0x0, omap 0x1a302, meta 0x3d55cfe), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.126583099s of 13.255347252s, submitted: 23
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:18.542298+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1966080 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161510 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:19.542448+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1966080 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb452000/0x0/0x4ffc00000, data 0x956898/0xa38000, compress 0x0/0x0/0x0, omap 0x1a576, meta 0x3d55a8a), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:20.542619+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1966080 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:21.542792+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1966080 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:22.543050+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1957888 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:23.544156+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1957888 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160344 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb455000/0x0/0x4ffc00000, data 0x9567fd/0xa37000, compress 0x0/0x0/0x0, omap 0x1a576, meta 0x3d55a8a), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:24.544397+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1957888 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:25.545239+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1957888 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb455000/0x0/0x4ffc00000, data 0x9567fd/0xa37000, compress 0x0/0x0/0x0, omap 0x1a576, meta 0x3d55a8a), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:26.545983+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb455000/0x0/0x4ffc00000, data 0x9567fd/0xa37000, compress 0x0/0x0/0x0, omap 0x1a576, meta 0x3d55a8a), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:27.546393+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:28.546909+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163838 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb450000/0x0/0x4ffc00000, data 0x958402/0xa3a000, compress 0x0/0x0/0x0, omap 0x1a84a, meta 0x3d557b6), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:29.547441+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:30.547860+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:31.548264+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb450000/0x0/0x4ffc00000, data 0x958402/0xa3a000, compress 0x0/0x0/0x0, omap 0x1a84a, meta 0x3d557b6), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:32.548632+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1949696 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.989291191s of 15.049725533s, submitted: 32
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:33.548805+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166468 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:34.548935+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:35.549103+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:36.549253+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:37.549387+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb44f000/0x0/0x4ffc00000, data 0x959e81/0xa3d000, compress 0x0/0x0/0x0, omap 0x1ab96, meta 0x3d5546a), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:38.549529+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165924 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:39.549826+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 1884160 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:40.550003+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:41.550176+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb44f000/0x0/0x4ffc00000, data 0x959e81/0xa3d000, compress 0x0/0x0/0x0, omap 0x1ab96, meta 0x3d5546a), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:42.550316+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:43.551363+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165908 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:44.551600+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:45.551811+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:46.551972+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:47.552168+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb44f000/0x0/0x4ffc00000, data 0x959e81/0xa3d000, compress 0x0/0x0/0x0, omap 0x1ab96, meta 0x3d5546a), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.829722404s of 14.890094757s, submitted: 14
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:48.552364+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165908 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:49.552552+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:50.552787+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 1875968 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb44f000/0x0/0x4ffc00000, data 0x959e81/0xa3d000, compress 0x0/0x0/0x0, omap 0x1ab96, meta 0x3d5546a), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:51.552936+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 1867776 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:52.553073+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb44c000/0x0/0x4ffc00000, data 0x95ba86/0xa40000, compress 0x0/0x0/0x0, omap 0x1adb3, meta 0x3d5524d), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:53.553204+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168666 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:54.553392+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:55.553622+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:56.553776+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:57.553963+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 1859584 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 151 handle_osd_map epochs [151,152], i have 152, src has [1,152]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.087899208s of 10.164388657s, submitted: 26
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:58.554132+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fb44c000/0x0/0x4ffc00000, data 0x95ba86/0xa40000, compress 0x0/0x0/0x0, omap 0x1adb3, meta 0x3d5524d), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 1851392 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172016 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:07:59.554312+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 1851392 heap: 96256000 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:00.554511+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95461376 unmapped: 1843200 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb442000/0x0/0x4ffc00000, data 0x95f10a/0xa46000, compress 0x0/0x0/0x0, omap 0x1b302, meta 0x3d54cfe), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:01.554658+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1818624 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:02.554881+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1818624 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 15
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:03.555076+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 1810432 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175762 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:04.555234+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 1810432 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:05.555491+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 1810432 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:06.555651+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 1810432 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb445000/0x0/0x4ffc00000, data 0x95f1a5/0xa47000, compress 0x0/0x0/0x0, omap 0x1b302, meta 0x3d54cfe), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:07.555784+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95518720 unmapped: 1785856 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:08.555923+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95576064 unmapped: 1728512 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.972333908s of 10.498694420s, submitted: 93
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185732 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:09.556068+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 1638400 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:10.556205+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 1638400 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:11.556394+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb43b000/0x0/0x4ffc00000, data 0x964595/0xa51000, compress 0x0/0x0/0x0, omap 0x1bb4d, meta 0x3d544b3), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 1638400 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:12.556549+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 1638400 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:13.556748+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 1638400 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186844 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:14.556961+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96714752 unmapped: 589824 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:15.557173+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fb433000/0x0/0x4ffc00000, data 0x967b73/0xa55000, compress 0x0/0x0/0x0, omap 0x1c224, meta 0x3d53ddc), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:16.557356+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:17.557553+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:18.557754+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190082 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:19.557927+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:20.558112+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:21.558326+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fb433000/0x0/0x4ffc00000, data 0x967b73/0xa55000, compress 0x0/0x0/0x0, omap 0x1c224, meta 0x3d53ddc), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fb433000/0x0/0x4ffc00000, data 0x967b73/0xa55000, compress 0x0/0x0/0x0, omap 0x1c224, meta 0x3d53ddc), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:22.558536+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 581632 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.193332672s of 14.270849228s, submitted: 49
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fb433000/0x0/0x4ffc00000, data 0x967b73/0xa55000, compress 0x0/0x0/0x0, omap 0x1c224, meta 0x3d53ddc), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:23.558682+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192536 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:24.558889+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:25.559095+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fb432000/0x0/0x4ffc00000, data 0x969612/0xa58000, compress 0x0/0x0/0x0, omap 0x1c612, meta 0x3d539ee), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:26.559261+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:27.559416+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:28.559548+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195166 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:29.559753+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 917504 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14940 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:30.559942+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 160 ms_handle_reset con 0x5587760a5400 session 0x558775b77a40
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 671744 heap: 97304576 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:31.560135+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb431000/0x0/0x4ffc00000, data 0x96b217/0xa5b000, compress 0x0/0x0/0x0, omap 0x1c834, meta 0x3d537cc), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:32.560317+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 16
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.137248993s of 10.324272156s, submitted: 247
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:33.560533+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201338 data_alloc: 218103808 data_used: 5067
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:34.560750+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:35.560970+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:36.561162+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fb42b000/0x0/0x4ffc00000, data 0x96e8f7/0xa61000, compress 0x0/0x0/0x0, omap 0x1cd9a, meta 0x3d53266), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:37.561323+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:38.561518+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 162 handle_osd_map epochs [162,163], i have 163, src has [1,163]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203776 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fb42b000/0x0/0x4ffc00000, data 0x96e8f7/0xa61000, compress 0x0/0x0/0x0, omap 0x1cd9a, meta 0x3d53266), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:39.561648+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:40.561838+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:41.562052+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:42.562247+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:43.562434+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203616 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:44.562676+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:45.562888+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:46.563144+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:47.563308+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:48.563472+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203616 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:49.563610+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:50.563774+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:51.563995+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:52.564124+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:53.564508+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203616 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:54.564691+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:55.564902+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:56.565087+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:57.565250+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:58.565392+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203616 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:08:59.565618+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:00.565792+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:01.565977+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:02.566166+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:03.566334+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.618175507s of 30.404338837s, submitted: 25
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205308 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:04.566524+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb425000/0x0/0x4ffc00000, data 0x970431/0xa65000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:05.566780+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:06.566935+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb425000/0x0/0x4ffc00000, data 0x970431/0xa65000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:07.567109+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:08.567277+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205308 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:09.567433+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:10.567729+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb425000/0x0/0x4ffc00000, data 0x970431/0xa65000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:11.567838+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1687552 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:12.568003+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:13.568182+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206280 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.337510109s of 10.493035316s, submitted: 2
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x9704cc/0xa66000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [0,0,0,0,1])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:14.568428+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x970431/0xa65000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:15.568674+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb428000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:16.568850+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:17.569021+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:18.569169+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204030 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb428000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:19.569357+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:20.569550+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb428000/0x0/0x4ffc00000, data 0x970396/0xa64000, compress 0x0/0x0/0x0, omap 0x1d1d0, meta 0x3d52e30), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:21.569761+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:22.569931+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:23.570133+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb423000/0x0/0x4ffc00000, data 0x971f9b/0xa67000, compress 0x0/0x0/0x0, omap 0x1d3f4, meta 0x3d52c0c), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207492 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:24.570327+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:25.570519+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:26.570735+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:27.570965+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb423000/0x0/0x4ffc00000, data 0x971f9b/0xa67000, compress 0x0/0x0/0x0, omap 0x1d3f4, meta 0x3d52c0c), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:28.571117+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207492 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:29.571303+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:30.571487+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:31.571629+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:32.571781+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb423000/0x0/0x4ffc00000, data 0x971f9b/0xa67000, compress 0x0/0x0/0x0, omap 0x1d3f4, meta 0x3d52c0c), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:33.571948+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb423000/0x0/0x4ffc00000, data 0x971f9b/0xa67000, compress 0x0/0x0/0x0, omap 0x1d3f4, meta 0x3d52c0c), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207492 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:34.572113+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:35.572278+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:36.572551+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:37.572820+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:38.573020+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.002613068s of 24.275821686s, submitted: 34
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:39.573182+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:40.573320+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:41.573465+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:42.573620+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:43.573747+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:44.573937+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:45.574164+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:46.574349+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:47.574531+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _renew_subs
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:48.574742+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:49.574888+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:50.575054+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:51.575203+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:52.575376+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:53.575497+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:54.575736+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:55.575905+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:56.576129+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:57.576270+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:58.576482+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:09:59.576639+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:00.576777+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:01.576890+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:02.577060+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:03.577256+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:04.577509+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:05.577807+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:06.578021+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:07.578201+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:08.578438+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:09.578669+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:10.578808+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:11.578953+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:12.579141+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:13.579280+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:14.579419+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:15.579609+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:16.579756+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:17.579907+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:18.580067+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:19.580236+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:20.580379+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:21.580515+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:22.580698+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:23.580850+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1703936 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:24.580969+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:25.581119+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:26.581255+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:27.581481+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:28.581671+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210122 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:29.581858+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb420000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1695744 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:30.582084+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 52.165187836s of 52.175907135s, submitted: 10
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 ms_handle_reset con 0x5587760a4400 session 0x5587758f8380
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:31.582292+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:32.582448+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Got map version 17
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:33.582604+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:34.582718+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:35.582863+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:36.582994+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:37.583177+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:38.583365+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:39.583499+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:40.583654+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:41.591584+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:42.591746+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:43.591868+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 1482752 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:44.592017+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 96903168 unmapped: 1449984 heap: 98353152 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:45.592188+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'config diff' '{prefix=config diff}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'config show' '{prefix=config show}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97394688 unmapped: 2007040 heap: 99401728 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:46.592313+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2195456 heap: 99401728 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:47.592428+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97509376 unmapped: 1892352 heap: 99401728 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:48.592539+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'log dump' '{prefix=log dump}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'perf dump' '{prefix=perf dump}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 13008896 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'perf schema' '{prefix=perf schema}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:49.592728+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:50.592938+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:51.593090+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:52.593228+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:53.593470+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:54.593684+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:55.594030+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:56.594169+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:57.594413+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:58.594672+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:10:59.594872+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:00.595028+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:01.595195+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:02.595400+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:03.595607+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:04.595778+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:05.596006+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:06.596216+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:07.596656+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:08.596843+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:09.597015+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:10.597176+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:11.597319+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:12.597766+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:13.598160+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:14.598386+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:15.598594+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:16.598755+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:17.599005+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:18.599155+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:19.599456+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:20.599628+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:21.599747+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:22.599907+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:23.600056+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:24.600244+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:25.600484+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:26.600696+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:27.601029+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:28.601160+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:29.601307+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:30.601445+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:31.601633+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:32.601792+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:33.601926+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:34.602084+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:35.602282+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:36.602455+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:37.602623+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:38.602750+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:39.602901+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:40.603032+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:41.603172+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:42.603306+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:43.603499+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:44.603670+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:45.603847+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:46.604037+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:47.604161+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:48.604287+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:49.604513+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:50.604694+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:51.604902+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:52.605057+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:53.605209+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:54.605507+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:55.607222+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:56.607410+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:57.607641+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:58.607815+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:11:59.607970+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:00.608086+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:01.608543+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:02.608788+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:03.608966+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:04.609128+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:05.609531+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:06.609696+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:07.609825+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:08.609974+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:09.610103+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:10.610345+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:11.610613+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:12.610736+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:13.610921+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:14.611095+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:15.611269+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:16.611464+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:17.611655+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:18.611843+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:19.612050+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:20.612182+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:21.612370+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:22.612602+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:23.612733+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:24.612875+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:25.613060+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:26.613221+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:27.613393+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:28.613540+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:29.613711+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:30.613914+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:31.614123+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:32.614330+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:33.614523+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:34.614707+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:35.614893+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:36.615084+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:37.615299+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:38.615524+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:39.615722+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:40.615908+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:41.616061+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:42.616287+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:43.616470+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:44.616648+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:45.616808+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:46.616945+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:47.617097+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:48.617234+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:49.617372+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:50.617520+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:51.617722+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:52.617909+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:53.618061+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:54.618175+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:55.618312+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:56.618411+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:57.618551+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:58.618737+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:12:59.618866+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:00.618976+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:01.619116+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:02.619259+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:03.619359+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:04.619456+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:05.619641+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:06.619762+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:07.619904+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:08.620075+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:09.620235+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:10.620381+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:11.620512+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:12.620635+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:13.620755+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:14.620897+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:15.621091+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:16.621223+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:17.621374+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:18.621502+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:19.621635+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:20.621769+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:21.621917+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:22.622082+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:23.622240+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:24.622430+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:25.622623+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:26.630661+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:27.630789+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:28.630983+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:29.631105+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:30.631248+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:31.631428+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:32.631632+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:33.631810+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:34.631941+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:35.632103+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:36.632316+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:37.632485+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:38.632663+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:39.632898+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:40.633014+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:41.633126+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:42.633337+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:43.633510+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:44.633668+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:45.633872+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:46.634015+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:47.634185+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:48.634361+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:49.634511+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:50.634669+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:51.634891+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:52.635055+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 12738560 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:53.635245+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 12738560 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:54.635374+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 12738560 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:55.635649+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 12738560 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:56.635805+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 12738560 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:57.635990+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 12738560 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:58.636164+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 12738560 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:13:59.636356+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 12738560 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:00.636510+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:01.636679+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:02.636824+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:03.636956+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:04.637110+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:05.637314+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:06.637473+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:07.637712+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:08.637875+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:09.638078+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:10.638200+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:11.638345+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:12.638516+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:13.638689+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:14.638848+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:15.639060+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12795904 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:16.639238+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:17.639397+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:18.639610+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12787712 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:19.642624+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:20.642841+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:21.643254+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:22.643634+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:23.643786+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:24.643995+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:25.644489+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:26.644855+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:27.645166+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:28.645337+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:29.645495+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:30.645620+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:31.645851+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:32.646015+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:33.646173+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:34.646335+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:35.646517+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:36.646676+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:37.646859+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:38.647064+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:39.647274+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:40.647428+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12779520 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:41.647609+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:42.647769+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:43.647912+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:44.648021+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:45.648183+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:46.648340+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:47.648518+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:48.648685+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:49.648843+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12771328 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:50.648983+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9140 writes, 33K keys, 9140 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9140 writes, 2255 syncs, 4.05 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2041 writes, 5837 keys, 2041 commit groups, 1.0 writes per commit group, ingest: 7.08 MB, 0.01 MB/s
                                           Interval WAL: 2041 writes, 763 syncs, 2.67 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:51.649162+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:52.649312+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:53.649447+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:54.649618+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:55.649784+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:56.649902+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:57.650062+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:58.650210+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:14:59.650363+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:00.650491+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12763136 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:01.650656+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:02.650849+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:03.651080+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:04.651216+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:05.652054+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:06.652219+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:07.652371+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:08.652549+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:09.652743+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:10.653283+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:11.653431+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:12.653584+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:13.653823+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:14.653974+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:15.654124+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:16.654255+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:17.654395+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:18.654550+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:19.654741+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:20.654886+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:21.655057+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:22.655195+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:23.655344+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:24.655477+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:25.655657+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:26.655823+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:27.655968+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:28.656114+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:29.656325+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:30.656458+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:31.656592+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 12754944 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:32.656737+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:33.656887+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:34.657014+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:35.657229+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:36.657420+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:37.657635+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:38.657769+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:39.657904+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:40.658047+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:41.658191+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:42.658351+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:43.658646+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:44.658934+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:45.659072+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:46.659254+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:47.659390+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:48.659621+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:49.659800+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:50.659928+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:51.660095+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:52.660218+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:53.660399+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:54.660593+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:55.660756+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:56.660873+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:57.660989+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:58.661369+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:15:59.661621+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:00.661750+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:01.661877+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:02.662105+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:03.662256+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 333.777954102s of 333.794189453s, submitted: 144
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:04.662406+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12746752 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:05.662603+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97746944 unmapped: 12697600 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:06.662749+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 12664832 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:07.662937+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 12664832 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:08.663114+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 12664832 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:09.663276+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:10.663467+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:11.663719+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:12.663855+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:13.664004+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:14.664177+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:15.664424+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:16.664605+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:17.664752+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:18.664899+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:19.665074+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:20.665210+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:21.665346+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:22.665501+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:23.665678+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:24.665884+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:25.666075+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:26.666270+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:27.666415+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:28.666586+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12648448 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:29.666735+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97812480 unmapped: 12632064 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:30.666913+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97812480 unmapped: 12632064 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:31.667076+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97812480 unmapped: 12632064 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:32.667223+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:33.667377+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:34.667516+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:35.667758+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:36.668064+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:37.668330+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:38.668491+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:39.668661+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:40.668835+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:41.668984+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:42.669254+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:43.669468+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:44.669623+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:45.669805+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:46.669928+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:47.670101+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:48.670263+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97820672 unmapped: 12623872 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:49.670730+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:50.670882+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:51.671102+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:52.671252+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:53.671362+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:54.671478+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:55.671621+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:56.671740+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:57.671901+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:58.672029+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:16:59.672218+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:00.672371+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:01.672503+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:02.672669+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:03.674458+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:04.675624+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:05.676374+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:06.676722+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:07.676882+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:08.677288+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 12607488 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:09.677514+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:10.677718+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:11.677992+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:12.678150+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:13.678525+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:14.678944+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:15.679158+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:16.679412+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:17.679531+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:18.679782+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:19.679968+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:20.680157+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:21.680299+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:22.680500+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:23.680686+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:24.680931+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:25.681139+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:26.681343+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:27.681494+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:28.681676+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:29.681842+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:30.682056+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:31.682211+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:32.682372+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:33.682496+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:34.682658+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:35.682834+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:36.682989+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:37.683143+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:38.683303+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:39.683463+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:40.683628+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:41.683797+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:42.683925+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:43.684094+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:44.684276+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:45.684477+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:46.684647+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:47.684804+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:48.684945+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:49.685098+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:50.685288+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:51.685524+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:52.685720+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:53.685864+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:54.686019+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:55.686199+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97853440 unmapped: 12591104 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:56.686331+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:57.686464+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:58.686635+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:17:59.686752+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:00.686870+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:01.687017+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:02.687138+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:03.687263+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:04.687408+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:05.687636+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:06.687811+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:07.687936+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:08.688112+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:09.688279+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:10.688415+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:11.688554+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:12.688776+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:13.689015+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:14.689136+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:15.689291+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:16.689410+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:17.689539+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:18.689675+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:19.689827+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:20.689957+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:21.690120+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:22.690301+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:23.690467+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:24.690591+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:25.690747+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:26.690885+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:27.691033+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:28.691127+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:29.691251+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:30.691404+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:31.691543+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:32.691721+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:33.691872+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 12574720 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:34.691991+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:35.692152+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:36.692262+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:37.692371+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:38.692616+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:39.692883+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:40.693199+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:41.693401+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:42.693618+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:43.693883+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:44.694078+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:45.694268+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:46.694464+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:47.694594+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:48.694794+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:49.694936+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:50.695090+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 12566528 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:51.695244+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:52.695360+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:53.695495+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:54.695680+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:55.695887+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:56.696067+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:57.696238+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:58.696390+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:18:59.696542+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:00.696930+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:01.697062+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:02.697246+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:03.697394+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:04.697658+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:05.697851+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:06.697977+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:07.698154+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:08.698277+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:09.698464+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:10.698616+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:11.698767+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:12.698934+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:13.699058+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:14.699230+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:15.699463+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:16.699610+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:17.699753+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:18.699897+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:19.700038+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:20.700164+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:21.700299+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:22.700447+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:23.700628+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:24.700776+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:25.700955+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 12541952 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:26.701105+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:27.701243+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:28.701371+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:29.701604+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:30.701816+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:31.702074+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:32.702225+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:33.702452+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:34.702612+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:35.702767+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:36.702905+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:37.703049+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:38.703183+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:39.703376+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:40.703515+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:41.703719+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:42.703881+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:43.704049+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:44.704186+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:45.704345+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 12533760 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:46.704770+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:47.705072+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:48.705344+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:49.705511+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:50.705657+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets getting new tickets!
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:51.705966+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _finish_auth 0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:51.707196+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:52.706139+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:53.706272+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:54.706482+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:55.706668+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:56.706786+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:57.706943+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:58.707111+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 12558336 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc ms_handle_reset ms_handle_reset con 0x558772a55400
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4161918394
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4161918394,v1:192.168.122.100:6801/4161918394]
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: get_auth_request con 0x558776459800 auth_method 0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: mgrc handle_mgr_configure stats_period=5
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:19:59.707307+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 12460032 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:00.707472+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 12460032 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:01.707625+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 12468224 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:02.707781+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 12468224 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:03.707981+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 12468224 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 ms_handle_reset con 0x5587760a5000 session 0x5587741ccfc0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: handle_auth_request added challenge on 0x5587760a4400
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:04.708180+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 12468224 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:05.708345+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97845248 unmapped: 12599296 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:06.708473+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97845248 unmapped: 12599296 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:07.708613+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97845248 unmapped: 12599296 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:08.709607+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97845248 unmapped: 12599296 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:09.709720+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97845248 unmapped: 12599296 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'config diff' '{prefix=config diff}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'config show' '{prefix=config show}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:10.709850+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97615872 unmapped: 12828672 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:11.709989+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 19:20:43 compute-0 ceph-osd[85953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 19:20:43 compute-0 ceph-osd[85953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209402 data_alloc: 218103808 data_used: 5339
Jan 23 19:20:43 compute-0 ceph-osd[85953]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12640256 heap: 110444544 old mem: 2845415832 new mem: 2845415832
Jan 23 19:20:43 compute-0 ceph-osd[85953]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb422000/0x0/0x4ffc00000, data 0x973a1a/0xa6a000, compress 0x0/0x0/0x0, omap 0x1d735, meta 0x3d528cb), peers [1,2] op hist [])
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: tick
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_tickets
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T19:20:12.710123+0000)
Jan 23 19:20:43 compute-0 ceph-osd[85953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 19:20:43 compute-0 ceph-osd[85953]: do_command 'log dump' '{prefix=log dump}'
Jan 23 19:20:43 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 19:20:43 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 23 19:20:43 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3598585463' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 23 19:20:43 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2116450636' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 23 19:20:43 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/268649211' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 23 19:20:43 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14944 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:44 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14946 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:44 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14948 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} v 0)
Jan 23 19:20:44 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:20:44 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14952 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:44 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} v 0)
Jan 23 19:20:44 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:20:44 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14950 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:44 compute-0 ceph-mon[75097]: pgmap v1542: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:44 compute-0 ceph-mon[75097]: from='client.14940 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:44 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3598585463' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 23 19:20:44 compute-0 ceph-mon[75097]: from='client.14944 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:44 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:20:44 compute-0 ceph-mon[75097]: from='mgr.14122 192.168.122.100:0/587667717' entity='mgr.compute-0.rszfwe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kehgrd", "name": "rgw_frontends"} : dispatch
Jan 23 19:20:45 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:45 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14954 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:45 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14958 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:45 compute-0 ceph-mon[75097]: from='client.14946 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:45 compute-0 ceph-mon[75097]: from='client.14948 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:45 compute-0 ceph-mon[75097]: from='client.14952 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:45 compute-0 ceph-mon[75097]: from='client.14950 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:45 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 23 19:20:45 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3993707037' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 23 19:20:45 compute-0 systemd[1]: Starting Hostname Service...
Jan 23 19:20:46 compute-0 systemd[1]: Started Hostname Service.
Jan 23 19:20:46 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14962 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:46 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 23 19:20:46 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2091586298' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 23 19:20:46 compute-0 ceph-mon[75097]: pgmap v1543: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:46 compute-0 ceph-mon[75097]: from='client.14954 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:46 compute-0 ceph-mon[75097]: from='client.14958 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:46 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/3993707037' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 23 19:20:46 compute-0 ceph-mon[75097]: from='client.14962 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 19:20:46 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2091586298' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 23 19:20:47 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 23 19:20:47 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170116729' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 23 19:20:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 23 19:20:47 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156580223' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 23 19:20:47 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 19:20:47 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 19:20:47 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 19:20:47 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 19:20:47 compute-0 ceph-mon[75097]: pgmap v1544: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:47 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/170116729' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 23 19:20:47 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2156580223' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 23 19:20:47 compute-0 ceph-mon[75097]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 19:20:47 compute-0 ceph-mon[75097]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 19:20:47 compute-0 ceph-mon[75097]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 19:20:47 compute-0 ceph-mon[75097]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 19:20:47 compute-0 ceph-mon[75097]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 19:20:48 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 23 19:20:48 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1784678012' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 23 19:20:48 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14980 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:48 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1784678012' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 23 19:20:48 compute-0 ceph-mon[75097]: from='client.14980 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:49 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 23 19:20:49 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2717097187' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 23 19:20:49 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 23 19:20:49 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/471496' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 23 19:20:50 compute-0 ceph-mon[75097]: pgmap v1545: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:50 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2717097187' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 23 19:20:50 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/471496' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 23 19:20:50 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 23 19:20:50 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607173921' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 23 19:20:51 compute-0 ceph-mgr[75390]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:51 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 23 19:20:51 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1354813164' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 23 19:20:51 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/607173921' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 23 19:20:51 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/1354813164' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 23 19:20:51 compute-0 ceph-mgr[75390]: log_channel(audit) log [DBG] : from='client.14990 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:52 compute-0 ceph-mon[75097]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 23 19:20:52 compute-0 ceph-mon[75097]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199917155' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 23 19:20:52 compute-0 ceph-mon[75097]: pgmap v1546: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 23 19:20:52 compute-0 ceph-mon[75097]: from='client.14990 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 19:20:52 compute-0 ceph-mon[75097]: from='client.? 192.168.122.100:0/2199917155' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
